2022-09-15

Rearchitecting my Homelab with WireGuard

linux networking wireguard kubernetes

Introduction

Earlier this year, I set up a Kubernetes cluster for my own projects. Since then, I also ended up buying an old Lenovo ThinkCentre M700 for my own home, partially to have a dedicated machine to monitor my ping situation. I installed k3s on it to help manage my infrastructure more easily and to play around with k8s a bit more.

With the changes brought about by my WiFi monitoring setup, the overall ping monitoring picture looked something like this:

Original Architecture Diagram
Generated via diagrams

I eventually realized that I didn't really need to pay extra for the DigitalOcean-based managed k8s setup. Instead, I could set up a k3s agent node on a DO droplet and use that droplet as the frontend server to my homelab Lenovo.

WireGuard Network

However, I didn't really want to expose my home IP directly to the internet, so I had to think about how to set up a two-way connection between my DO droplet and my homelab. I had already been using WireGuard, mostly as an easy-to-set-up VPN. For ease of setup, I was running docker-wireguard in a Docker container. It was really easy to set up, and I could easily create droplets/VMs to connect to with my phone/PC.
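
For reference, that setup was roughly the following docker-compose service (a sketch based on the docker-wireguard README; the values are placeholders, not my real config):

wireguard:
  image: lscr.io/linuxserver/wireguard
  cap_add:
    - NET_ADMIN
    - SYS_MODULE
  environment:
    - PUID=1000
    - PGID=1000
    - SERVERURL=droplet.example.com  # placeholder
    - PEERS=2                        # e.g. one peer for my phone, one for my PC
  ports:
    - 51820:51820/udp
  volumes:
    - ./config:/config
  sysctls:
    - net.ipv4.conf.all.src_valid_mark=1
  restart: unless-stopped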

Unfortunately, attempting to use this with k3s resulted in some issues. The main one was that the Docker container doesn't create a WireGuard network interface on the host, which meant I had to painfully hand-craft host routes that went through the container (roughly like the sketch after this paragraph). However, even with the manual routes set up, k3s was unable to figure out the topology of the network from the side of my homelab. I even manually entered the WireGuard IP address of my other node, but flannel was unable to route to pods/services on my DO node. That meant that pods on my homelab were unable to connect to any pods in my droplet. There were also errors like this.
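
For illustration, the manual routing amounted to something like this (the WireGuard subnet and the container's bridge IP here are made up):

# on the host: send WireGuard subnet traffic to the container's address on the docker bridge
ip route add 10.13.13.0/24 via 172.17.0.2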

Because of that, I resorted to setting up my WireGuard network manually. I mostly followed the documentation and set up wg-quick, so it wasn't too bad.
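
My config files ended up looking roughly like the following sketch of the droplet side (the keys, IPs, and port are placeholders, not my real values):

# /etc/wireguard/wg0.conf on the DO droplet (sketch; placeholder values)
[Interface]
PrivateKey = <droplet_private_key>
Address = 10.0.0.2/24
ListenPort = 51820

[Peer]
# the homelab machine; it dials out to the droplet, so no Endpoint is set here
PublicKey = <homelab_public_key>
AllowedIPs = 10.0.0.1/32

The homelab side mirrors this, with an Endpoint pointing at the droplet's public IP and a PersistentKeepalive so the tunnel stays open from behind my home NAT. After a wg-quick up wg0 on both ends, I ran: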

curl -sfL https://get.k3s.io | K3S_URL=https://homelab_wireguard_ip:6443 K3S_TOKEN=node_token sh -s - agent --node-ip do_droplet_wireguard_ip --flannel-iface=wg0 --node-label network=do-network

Note that I had previously also set up my server node to use --flannel-iface=wg0, but I didn't change my flannel backend. After all this, my droplet was able to function as a k3s node. I honestly should have just tried this directly instead of painfully messing with ip routes.
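
For completeness, the server-side install on the homelab looked something like this (a sketch; the flags are from the k3s docs, and the IP is a placeholder):

curl -sfL https://get.k3s.io | sh -s - server --node-ip homelab_wireguard_ip --flannel-iface=wg0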

Other setup

With WireGuard up, the other main thing I had to do was make sure certain pods always got scheduled on my DO droplet. The reason was that I wanted to ensure my ping monitoring was going through the DO droplet and not my local network. To do this, I used a nodeSelector: since I had labeled my DO droplet with network=do-network, I just needed to add a matching nodeSelector to my deployments' YAML files (example below). I considered using a DaemonSet, but decided that would be a minor annoyance if I ever added more nodes (maybe I'll buy more homelab equipment).
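
The relevant part of a deployment looks like this (the names here are made up; the label is the one applied via --node-label during the agent install):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ping-monitor  # hypothetical name
spec:
  # ...replicas, selector, etc. elided...
  template:
    spec:
      # only schedule onto nodes labeled network=do-network, i.e. the DO droplet
      nodeSelector:
        network: do-network
      containers:
        - name: ping-monitor
          image: registry.example.com/ping-monitor:latest  # placeholder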

Lastly, because I was already serving traffic on ports 80/443 via Caddy on my droplet for my Docker registry, I set my nginx ingress to use a different port and changed my Caddy instance to reverse proxy to nginx. The downside of this method is that, for direct access to my homelab, I could no longer just type the IP address of my homelab into my desktop's browser. To solve this, I also installed a Caddy reverse proxy on my homelab that does the same reverse proxying as my DO droplet's Caddy.
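
The Caddyfile change on the droplet amounted to something like this (a sketch; the domains and ports are placeholders):

# Caddyfile on the DO droplet (sketch)
registry.example.com {
    # existing Docker registry
    reverse_proxy localhost:5000
}

app.example.com {
    # everything else goes to the nginx ingress, now listening on a non-standard port
    reverse_proxy localhost:8080
}

The homelab's Caddy has essentially the same site blocks, pointed at the local ingress port instead.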

With all that work, my overall architecture became something like this:

New Architecture Diagram
Generated via diagrams

Setting up CI/CD

Like when I set up my own cluster for the first time, I consider having CI/CD very important. Previously, I had set up various ConfigMaps in my k8s cluster with the git commit SHA of the latest image. In my GitHub Actions, I would acquire credentials to connect to my cluster through the doctl CLI tool, update the ConfigMaps, and then run kubectl apply -f on my config files. This would update my cluster automatically.
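
The old workflow steps looked roughly like this (a sketch; the cluster, ConfigMap, and path names are made up):

# excerpt from a GitHub Actions job (sketch; names are placeholders)
- name: Install doctl
  uses: digitalocean/action-doctl@v2
  with:
    token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
- name: Fetch cluster credentials
  run: doctl kubernetes cluster kubeconfig save my-cluster
- name: Record the new image SHA
  run: kubectl patch configmap app-version -p '{"data":{"sha":"'"$GITHUB_SHA"'"}}'
- name: Apply manifests
  run: kubectl apply -f manifests/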

However, my homelab contains the server node and is not exposed to the public internet. Because of this, I can't easily use kubectl with passed-in credentials to modify the server state. My previous process for updating my homelab's k3s was manually running an update script, which checked the managed k8s cluster's ConfigMaps and ran kubectl apply -f on my config files (roughly the sketch below).
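
A heavily simplified sketch of that script (the ConfigMap layout, paths, and placeholder token are all assumptions):

#!/bin/sh
# sketch of the manual homelab update script (names/paths are hypothetical)
set -e
# read the latest image SHA that CI wrote into the managed cluster's ConfigMap
sha=$(kubectl --kubeconfig "$HOME/.kube/do-config" get configmap app-version \
    -o jsonpath='{.data.sha}')
# substitute the SHA into the local manifests and apply them to the k3s cluster
for f in manifests/*.yaml; do
    sed "s/IMAGE_SHA/$sha/g" "$f" | kubectl apply -f -
done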

My first goal was to make it so that GitHub Actions is able to update my ConfigMaps. To achieve this, I created a service (config-map-updater) that talks to the k8s API directly from within the k3s cluster. It has permission to get/update ConfigMaps, and it provides an API endpoint to do the required update. This was a good opportunity to try out Connect, a way to create protobuf RPC services easily. RPCs are nicer than having to figure out the right REST verb. I then exposed this RPC service via my ingress and Caddy reverse proxy. With this, GitHub Actions was able to use the API provided by config-map-updater to update the appropriate ConfigMap (after I created a client with the Connect library). I also implemented auth so that hopefully only I can update ConfigMaps. That solves the initial problem of updating my ConfigMaps.
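
The service definition itself is tiny. A sketch of what the proto might look like (the package, message, and field names here are hypothetical):

// config_map_updater.proto (sketch; names are hypothetical)
syntax = "proto3";

package configmapupdater.v1;

message UpdateConfigMapRequest {
  string namespace = 1;  // namespace of the ConfigMap
  string name = 2;       // which ConfigMap to update
  string key = 3;        // e.g. "sha"
  string value = 4;      // e.g. the new git commit SHA
}

message UpdateConfigMapResponse {}

service ConfigMapUpdaterService {
  rpc UpdateConfigMap(UpdateConfigMapRequest) returns (UpdateConfigMapResponse) {}
}

Connect then generates both the server handler interface and the client from this one file.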

Next up was figuring out how I could update all my various deployments. Even though I had a preexisting script file, I realized that running it within a k8s pod was going to be painful in terms of setting up the correct environment/permissions. So, I opted to create a local service (k3s-updater) on my homelab that runs my preexisting update script. k3s-updater runs as my local user and, upon receiving an update RPC, executes the aforementioned script file. To prevent concurrent updates from running the script multiple times, I added a mutex. The update RPC is called by the config-map-updater created previously, so GitHub Actions can call config-map-updater, and config-map-updater can in turn call k3s-updater.
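
k3s-updater boils down to very little code. Here's a minimal sketch in Go, using a plain HTTP handler instead of the generated Connect handler to keep it self-contained (the script path and port are placeholders):

package main

import (
	"log"
	"net/http"
	"os/exec"
	"sync"
)

func main() {
	var mu sync.Mutex // serializes update requests so the script never runs twice at once

	http.HandleFunc("/update", func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		defer mu.Unlock()

		// run the preexisting update script as the local user
		out, err := exec.Command("/home/me/bin/update-k3s.sh").CombinedOutput()
		if err != nil {
			log.Printf("update failed: %v\n%s", err, out)
			http.Error(w, "update failed", http.StatusInternalServerError)
			return
		}
		w.Write([]byte("ok"))
	})

	log.Fatal(http.ListenAndServe(":8081", nil))
}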

Lastly, I set up a Service without selectors for my k3s-updater service. This allows pods inside the k8s cluster, even ones on the DO droplet, to connect to it by simply hitting http://k3s-updater, which helps with the ergonomics of the setup.
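
A Service without selectors is just a normal Service plus a manually managed Endpoints object pointing at the homelab's WireGuard IP (a sketch; the IP and port are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: k3s-updater
spec:
  # no selector: the endpoints below are managed by hand
  ports:
    - port: 80  # so pods can hit plain http://k3s-updater
---
apiVersion: v1
kind: Endpoints
metadata:
  name: k3s-updater  # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.0.1  # the homelab's WireGuard IP (placeholder)
    ports:
      - port: 8081    # where k3s-updater actually listens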

Thanks to all this, I can now trigger updates from Github Actions as part of CI/CD.

Concluding Thoughts

I am now able to connect to my homelab over the internet through my droplet and WireGuard. I can now host all my personal projects in my homelab and save the cost of the managed k8s cluster (especially with the recent price increases at DigitalOcean). Furthermore, I now have an overall more coherent setup instead of two separate k8s clusters.

Because I have a mere 10ms round-trip ping to my droplet, viewing pages from the internet is quite speedy. However, the other day, when DigitalOcean's network was having some slowdowns and my ping went to 100ms, I definitely noticed the lag (some CSS files took 3s to load). Hopefully that's not the norm.

Any error corrections or comments can be made by sending me a pull request.
