Powered By Kubernetes
I had tried this in the past, but at the time Kubernetes (and Google Container Engine) was too immature to actually run anything on. I checked in again a few months later and things were a lot more mature, but still very alpha.
I host a few blogs for people (this one included), and for the past few years I'd been doing it on an EC2 instance. I wasn't happy with how things were set up, mainly because I had invested so much time in the original setup that I wasn't looking forward to repeating that process to move somewhere else (like Google Compute Engine).
Enter Docker. I started converting my hosting setup into individual containers, with the hope of making things easier to test and run, and easier to move to a new provider. But Docker had pain points as well. I still needed to do a lot of work on the host side to keep the containers running, mainly because Docker wouldn't auto-restart my failed containers or figure out linked-container dependencies.
Thankfully, Kubernetes is a lot nicer, and since I was able to use GKE (Google Container Engine), I didn't have to do any mucking about on the host side. Just write a few config files, hand them off to Kubernetes, and everything is running.
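For a sense of what "a few config files" means, here's roughly what a minimal single-container pod definition looks like in YAML. The names and image path are made up for illustration, and the exact `apiVersion` string depends on which Kubernetes version you're running:

```yaml
# A minimal sketch of a pod definition. "blog" and the image
# path are hypothetical; substitute your own container.
apiVersion: v1
kind: Pod
metadata:
  name: blog
  labels:
    app: blog
spec:
  containers:
    - name: blog
      image: gcr.io/my-project/blog:latest
      ports:
        - containerPort: 8080
```

Hand a file like that to the cluster and Kubernetes takes care of scheduling it onto a node and keeping it running.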
It was not all roses, though. Probably my biggest pain point was documentation. And not the typical "there is no documentation" or even "the documentation is old". In this case, Kubernetes and GKE were documenting different versions of Kubernetes, and their velocity is such that not much was the same between them. This was a large source of confusion for me until I figured out which docs to trust (use the source, Luke!). It was especially bad when I couldn't get simple things like their example configurations up and running.
The rest of my pain points are kind of picky. I include them not to discourage people from using Kubernetes, but rather to illustrate how alpha the software is. They will eventually fix almost all of these issues; they just haven't had time yet.
- Once you create a cluster on GKE, it's that size forever: no adding machines, no dropping machines, no resizing machines.
- The Kubernetes master has to run on the same class of machine as all the worker nodes. Since it's basically just doing bookkeeping, it doesn't need the same horsepower as your worker nodes, so you may be wasting a little bit of money here.
- Configuring the cluster to serve outside traffic is painful. Kubernetes makes it easier on some hosting providers, but there is still a lot of manual futzing, and it took me quite a while to get it working.
- The YAML and JSON config file formats are either limited or annoying, depending on which one you use.
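To illustrate the tradeoff: here's a minimal (and entirely hypothetical) pod definition rendered as JSON. The YAML form of the same object is terser but whitespace-sensitive; the JSON form is explicit but buries the content in bracket noise:

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "blog",
    "labels": { "app": "blog" }
  },
  "spec": {
    "containers": [
      {
        "name": "blog",
        "image": "gcr.io/my-project/blog:latest",
        "ports": [ { "containerPort": 8080 } ]
      }
    ]
  }
}
```

Either format describes the same object to the API server, so which annoyance you prefer is mostly a matter of taste.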
It's not all vinegar, though. There are a few things they got super right:
- Intra-cluster networking. Want a reverse proxy that rewrites URLs and sends requests to the proper backend? Easy. Just create a Kubernetes service that points to your pods, then have your reverse proxy talk to the service's hostname. Doing the same thing on raw Docker requires evaluating environment variables at container startup to dynamically modify your configuration.
- Did I mention no host-side configuration? It took one command to bring up the cluster; after that it was just a matter of getting the service/pod config files set up to deploy my containers.
- GKE comes with an automatic private Docker image repository, which saves you from having to configure one yourself.
- Kubernetes makes it easy to distribute secrets (like auth keys) to your worker nodes in a safe and secure manner.
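To make the networking point concrete, here's a sketch of a service definition that fronts a set of pods. All names are hypothetical, and the `apiVersion` may differ depending on your Kubernetes version:

```yaml
# A minimal sketch of a service selecting pods labeled app: blog.
# A reverse proxy elsewhere in the cluster can point at the
# service's hostname ("blog-backend") instead of tracking pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: blog-backend
spec:
  selector:
    app: blog
  ports:
    - port: 80
      targetPort: 8080
```

Because the service does the pod lookup by label, pods can come, go, and restart without the reverse proxy's configuration ever changing.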
Kubernetes is going to be awesome. Right now there are a bunch of rough corners, and anything you do will probably give you a splinter. But it still works; there's just a bit of a learning curve.