This guide is focused on a small, single-instance server. Initialisation logic in particular is probably not safe against horizontal scaling. Beware! 🙂
This guide will go through:
- Setting up your development environment for Kubernetes deployment.
- Dockerising your Rails application.
- Setting up a container registry for deploying images to.
- Setting up Kubernetes for your Rails project, with a focus on supporting multiple environments deployed in production (e.g. a staging and production instance).
- Setting up Postgres instances for each of your environments.
- Setting up a single Nginx ingress for your cluster which allows ingress to all your environments (which are otherwise isolated).
- Setting up TLS with cert-manager, as well as a current caveat with the Nginx ingress and how you can avoid it.
Running Ruby on Rails in production can be one of the most hair-pulling steps in getting your new application up and running, especially in contrast to how elegant most of the process of writing a Rails application is.
One of Kubernetes’ biggest benefits is how it lets you scale applications and leverage the power of the cloud, but just as nice is that you write declarative (as opposed to imperative) configuration for your services, rather than managing a VPS yourself, with all the trouble that entails. You free yourself from manual iptables / ufw management, worry less about things like what starts your service and restarts it if it crashes, and develop skills that come in useful in modern cloud-based businesses.
All that said, it presents its own difficulties. I ran into quite a few hold-ups, ranging from certificate issuance to serving static files from Rails through Nginx.
Setup
First, you’ll want to make sure you have a local Kubernetes development environment with Kustomize installed. If you’re on macOS that’s as easy as running:
brew install kubectl # Kubernetes' CLI
brew install kustomize # Fantastic templating engine
brew install doctl # DigitalOcean CLI
You’ll also need to install Docker, which you can currently do at https://docs.docker.com/engine/install/.
You’ll then want to set up your Kubernetes cluster in DigitalOcean. I went with a simple two-$10-node setup. Keep in mind you’ll also need a load balancer (currently $10/month), a container registry, as well as a bunch of persistent volumes. The latter aren’t hugely expensive, but will likely add up to a couple of dollars a month.
Dockerising Rails
If you don’t know much about Docker, it’s worth having a quick read up on it. But in short, Docker lets you generate portable images of your application with batteries included; these can be pushed to a container registry and then run inside Kubernetes pods.
Docker containers are specified by a Dockerfile. Most commands generate a new layer, and layers are composed together to create the final image. Docker has intelligent caching, which means it’s best to put things that don’t change much (like system library installs for Nokogiri) first, and things that change often further down (such as your application’s files).
Here’s my Dockerfile, which may help you dockerise your application. Full disclaimer – there may be better ways to do it – but I’ve found this works quite well. You’ll also need to substitute your application’s ‘name’ where I’ve written <APPLICATION NAME>, in a format that works as a folder name. If you choose to use this, save it as a file called Dockerfile in your Rails root.
FROM ruby:2.7.0
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt-get update -qq && apt-get install -y build-essential nodejs yarn
# Postgres
RUN apt-get install -y libpq-dev
# Nokogiri
RUN apt-get install -y libxml2-dev libxslt1-dev
# Capybara-webkit
RUN apt-get install -y libqt4-dev xvfb
ENV APP_HOME /<APPLICATION NAME>
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
ADD Gemfile* $APP_HOME/
RUN bundle config build.nokogiri --use-system-libraries
RUN bundle install
ADD . $APP_HOME
# Dummy value to satisfy Rails
ARG SECRET_KEY_BASE=DUMMY
# You can still run non-production environments from
# this Dockerfile, but this makes sure assets are compiled
# targeting production.
ARG RAILS_ENV=production
RUN yarn install --check-files
RUN bundle exec rake assets:precompile RAILS_ENV=production
Notice that we set SECRET_KEY_BASE=DUMMY. We will be deploying our Rails master key as a Kubernetes secret later, but sadly rake assets:precompile currently expects a secret key base to be present due to a dependency within that command, even though it doesn’t use it for anything. Setting it to a dummy value allows everything to run smoothly.
One more thing – notice that we add Gemfile* (i.e. both Gemfile and Gemfile.lock) separately from everything else. This is because our application as a whole changes far more often than our Gemfile does. By ordering our Dockerfile like this, Docker can cache the layers involved in installing and setting up gems, and avoid redoing that work every time something in your application changes.
Once your Dockerfile is set up, running docker build . should work. If so, you’re ready to continue (you may also want to actually run the image to check everything is set up right, but that’s out of the scope of this guide).
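If you do want a quick local smoke test, something like the following should work (a sketch – myapp-test is just an arbitrary local tag name):
# Build and tag the image locally
docker build -t myapp-test .
# Open a shell inside the image to poke around
docker run --rm -it myapp-test bash
# ...then, inside the container, check the assets were compiled:
ls public/assets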
Setting up a container registry
Just like source code is best pushed to a source control repository, containers are best served col–.. er, I mean, in a container registry. This allows Kubernetes to pull them down and centralises your application’s runnable images.
DigitalOcean has a private container registry system in beta right now. You can set one up under Images -> Container Registry. Once that’s done, you’ll need to run doctl registry login in a terminal, which sets your Docker CLI up to push to your container registry.
Once done, try it out. Your previous docker build (or just run docker build . now if you haven’t already) should have printed a hash at the end, for example:
Successfully built b952cefba0ac
You can then use that hash to tag and push an image, as follows:
REGISTRY_NAME="YOUR_REGISTRY_NAME_HERE"
IMAGE_NAME="YOUR_APPLICATION_NAME_HERE"
DOCKER_IMAGE_ID="YOUR_HASH_HERE"
VERSION="0.0.0"
DOCKER_REGISTRY="registry.digitalocean.com/${REGISTRY_NAME}"
IMAGE="${DOCKER_REGISTRY}/${IMAGE_NAME}:${VERSION}"
docker tag "$DOCKER_IMAGE_ID" "$IMAGE"
docker push "$IMAGE"
Setting up your Kubernetes cluster
You can set up your Kubernetes cluster using Terraform, but for this guide I suggest doing it in the UI. Note that during the current early access, DigitalOcean seems to limit container registries to Amsterdam (AMS3). If so, it’s probably worth colocating your Kubernetes cluster in the same region unless you have a good reason not to. Use the latest Kubernetes version, and customise your Kubernetes cluster however you like. Personally, I went with two small ($10) nodes.
Then you’ll want to set up your kubectl CLI to access the cluster. That’s pretty easy:
doctl kubernetes cluster kubeconfig save <CLUSTER NAME>
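You can confirm the CLI is pointed at the right cluster:
kubectl config current-context # should name your DigitalOcean cluster
kubectl get nodes              # should list your nodes as Ready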
Deploying your Rails application
Now we’ll deploy our Rails application. While setting up, you’ll probably want to hard-code your ApplicationController to show a maintenance page, and perhaps even use a subdomain for the time being.
First, you’ll need to set up your Kubernetes configuration. Make a folder structure as follows (with empty files for now):
k8s/
certificate_issuer.yaml
base/
database.yaml
application.yaml
kustomization.yaml
overlays/
prod/
application.yaml
ingress.yaml
kustomization.yaml
namespace.yaml
Recall the application name you used earlier for your Docker folder name. You needn’t use the same name for your Kubernetes labels, but it’s probably best to be consistent, so I’ll assume you are.
Within the base folder, you set up the basics shared between all of your deployed environments, so we’ll start with application.yaml:
apiVersion: v1
kind: Service
metadata:
name: <APPLICATION_NAME>
spec:
type: ClusterIP
ports:
- name: rails
port: 80
targetPort: 8080
- name: assets
port: 81
targetPort: 80
selector:
app: <APPLICATION_NAME>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: <APPLICATION_NAME>
spec:
replicas: 1
selector:
matchLabels:
app: <APPLICATION_NAME>
template:
metadata:
labels:
app: <APPLICATION_NAME>
spec:
volumes:
- name: public-assets
emptyDir: {}
      initContainers:
      - name: init-static-files
        image: registry.digitalocean.com/<REGISTRY>/<IMAGE>
        # Copy the compiled assets baked into the image onto the shared volume
        command: ["sh", "-c", "cp -r /<APPLICATION_NAME>/public/. /public/"]
        volumeMounts:
        - name: public-assets
          mountPath: /public
      # Note: db-migrate and db-seed need the same env block as the main
      # container below (RAILS_MASTER_KEY, DATABASE_USERNAME,
      # DATABASE_PASSWORD) in order to reach the database.
      - name: db-migrate
        image: registry.digitalocean.com/<REGISTRY>/<IMAGE>
        command: ["bin/rails"]
        args: ["db:migrate"]
      - name: db-seed
        image: registry.digitalocean.com/<REGISTRY>/<IMAGE>
        command: ["bin/rails"]
        args: ["db:seed"]
containers:
- name: <APPLICATION_NAME>
image: registry.digitalocean.com/<REGISTRY>/<IMAGE>
command: ["bin/rails"]
args: ["server", "--environment", "production", "--port", "8080"]
ports:
- containerPort: 8080
volumeMounts:
- name: public-assets
mountPath: /<APPLICATION_NAME>/public
subPath: public
env:
- name: RAILS_MASTER_KEY
valueFrom:
secretKeyRef:
name: rails-master-key
key: key
- name: DATABASE_USERNAME
value: postgres
- name: DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: rails-db-key
key: key
- name: nginx
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: public-assets
mountPath: /usr/share/nginx/html
subPath: public
Replace <APPLICATION_NAME> with your application name, and <REGISTRY> and <IMAGE> with your DigitalOcean registry and image names, throughout. Do not include a version on your images – Kustomize will handle that for us later on.
There’s a lot to unpack here. We’ve included both a Service and a Deployment in the same file, although you can split it into two files if you so wish. The triple hyphen in YAML separating the two definitions is essentially a “file break”.
First off, the deployment. We’re running our Rails server on port 8080, and an Nginx server on port 80. These are pod-specific ports and won’t be exposed to the internet, don’t worry. They’ll be used in our networking within the cluster.
The most confusing thing going on here is how we’re managing public asset serving. There are certainly better ways to do this than what I’ve done here – like pushing your static assets to a CDN, S3 bucket, or DigitalOcean Space – but this is a fairly simple approach that works pretty well. We make use of the fact that our built image has all our public assets sitting nicely in the public/ folder. We create a volume called public-assets, which is mounted to both our Nginx container (which actually serves the static assets) and our application container. We then lean on Kubernetes’ support for init containers, which run sequentially before your application’s container starts, and make an init container that runs your application’s image and copies all the public files onto the public volume mount.
This trick actually works slightly better in docker-compose than in Kubernetes, as there you can mount a shared volume onto an existing folder and automatically include the files in that folder. Sadly, that doesn’t appear to be possible in Kubernetes, but this gets around the limitation, albeit not terribly elegantly.
We also run two other init containers: one to migrate our database and another to seed it. I’m assuming your db:seed task is idempotent – that is, running it multiple times has the same effect as running it once. This is generally good practice, because it means new seed data (such as rows for a table you add in a migration) gets created on the next deploy. If your seeding is not idempotent, remove the relevant init container and seed manually, the same way we do the first-time database setup below.
Note that the arguments we pass to the Rails command in the base (setting the environment and port) are just defaults; the environment-specific configuration to come will override them, so each environment can run with its own settings.
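For the DATABASE_* variables to do anything, your config/database.yml needs to read them and point at the db Service we’re about to define in database.yaml. A minimal sketch (the database name here is a placeholder – adjust for your app):
# config/database.yml (sketch)
production:
  adapter: postgresql
  host: db # the Kubernetes Service name defined in database.yaml below
  username: <%= ENV["DATABASE_USERNAME"] %>
  password: <%= ENV["DATABASE_PASSWORD"] %>
  database: <APPLICATION_NAME>_production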
Next we set up database.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: do-block-storage
---
apiVersion: v1
kind: Service
metadata:
name: db
spec:
selector:
app: postgres
ports:
- protocol: TCP
port: 5432
targetPort: 5432
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:12.4
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 5432
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: rails-db-key
key: key
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgredb
subPath: postgres
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: pvc
This sets up a persistent volume claim which will automatically set up a 5Gi DigitalOcean volume for you, and attaches it to the Postgres database which it also sets up. Nice and simple.
This is a good time to note that this guide does not cover exporting metrics and logs – you won’t get any warning when your database is getting full, or when it’s erroring. That’s something you’ll want to set up afterwards as part of productionising.
Next, let’s fill in the kustomization.yaml within base/:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- application.yaml
- database.yaml
We’re getting close now, but there are still a few more pieces to slide into place. Next, inside certificate_issuer.yaml, we set up a ClusterIssuer, one of the resources provided by cert-manager (which we’ll install into our cluster shortly):
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
# Email address used for ACME registration
email: <YOUR EMAIL>
server: https://acme-v02.api.letsencrypt.org/directory
privateKeySecretRef:
# Name of a secret used to store the ACME account private key
name: tls-key
# Add a single challenge solver, HTTP01 using nginx
solvers:
- http01:
ingress:
class: nginx
Make sure you replace the email with yours. Cert-manager automatically manages our TLS certificate renewal for us. The ingress we’ll write shortly references this cluster issuer in its annotations, which causes certificates to be issued for its hosts automatically.
Notice that this file is not contained within the base folder. This is because you only need a single ClusterIssuer in a cluster, and it will work across all Kubernetes namespaces. If you prefer to have an issuer per environment, you can instead move it in, add it to the Kustomization file, and change it from a ClusterIssuer to an Issuer (the rest of the file can remain the same).
Next, we set up our individual environments.
First, ingress.yaml:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ingress
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
tls:
- hosts:
- <DOMAIN NAME>
secretName: tls-key
rules:
- host: <DOMAIN NAME>
http:
paths:
- path: /assets
backend:
serviceName: <APPLICATION NAME>
servicePort: 81
- path: /packs
backend:
serviceName: <APPLICATION NAME>
servicePort: 81
- path: /
backend:
serviceName: <APPLICATION NAME>
servicePort: 80
Notice that our ingress rules set up the public folders to forward to port 81 (the Nginx file server) on our application service, and everything else to our Rails backend on port 80.
Next, the environment-specific application.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: <APPLICATION_NAME>
spec:
template:
spec:
containers:
- name: <APPLICATION_NAME>
args: ["server", "--environment", "production", "--port", "8080"]
Kustomize will merge this with our top-level base deployment; all we’re doing here is adding the argument list to set the environment. You may prefer to do this through an environment variable instead.
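If you go the environment-variable route, the patch might look like the following sketch – in that case you’d drop the --environment flag from the args and let Rails read RAILS_ENV instead:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <APPLICATION_NAME>
spec:
  template:
    spec:
      containers:
      - name: <APPLICATION_NAME>
        args: ["server", "--port", "8080"]
        env:
        - name: RAILS_ENV
          value: production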
Next, namespace.yaml – which is pretty simple; it just sets up the namespace for this environment of our application:
apiVersion: v1
kind: Namespace
metadata:
name: <APPLICATION_SHORT_NAME>-prod
You’ll want to switch out -prod accordingly. You’ll probably want APPLICATION_SHORT_NAME to be something quick and easy to type, like your website’s initials.
And, finally, kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: <APPLICATION_SHORT_NAME>-prod
resources:
- namespace.yaml
- ingress.yaml
- ../../base
patchesStrategicMerge:
- application.yaml
Make sure your namespace matches what you previously created.
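At this point you can sanity-check what Kustomize will generate, without applying anything:
# From your Rails root: print the fully merged manifests
kustomize build k8s/overlays/prod/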
Now we’re done setting up our configuration! Onto preparing our cluster…
Preparing your cluster for deployment
There are two things you’ll need set up in your cluster: an Nginx ingress controller, and cert-manager. These commands should get them both set up nicely:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo add jetstack https://charts.jetstack.io
helm repo update

helm install ingress-nginx ingress-nginx/ingress-nginx \
  --set controller.publishService.enabled=true \
  --set-string controller.config.use-forward-headers=true,controller.config.compute-full-forward-for=true,controller.config.use-proxy-protocol=true \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-enable-proxy-protocol"=true

helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.0.1 \
  --set installCRDs=true
This will set up a LoadBalancer on DigitalOcean which you will be automatically billed for. There is no good way around this that I’m aware of which still gives you a reliable static IP, even if you don’t think you need the full power of a load balancer. That said, it’s reasonably affordable – currently $10/month – and should let you scale quite a bit before causing any problems.
Go to your load balancer in DigitalOcean, and in its settings turn on PROXY protocol. We’ve configured the Nginx ingress above to use PROXY protocol, which means your Rails app will be able to see your users’ real IPs. Otherwise, all of your connections will appear to come from your load balancer… Not ideal! And it might make for some very interesting demographic conclusions: all of our users seem to live in the same house in Amsterdam!
Finally, you also need to deploy your certificate_issuer.yaml file, which is shared across all environments (unless you decided not to use a ClusterIssuer). You can do that as follows, from your Rails root:
kubectl apply -f k8s/certificate_issuer.yaml
The first deploy
Now you’re ready to deploy your application for the first time.
First, you’ll want to set your image version. We discussed earlier how to tag and push an image, and I gave commands for pushing version 0.0.0. If you didn’t do that, go back and do it now. Then run the following commands from within the overlays/prod directory – make sure you fill in the three variables at the top first:
REGISTRY="<YOUR REGISTRY NAME>"
IMAGE="<YOUR IMAGE NAME>"
VERSION="<YOUR VERSION>"
DOCKER_IMAGE="registry.digitalocean.com/${REGISTRY}/${IMAGE}"
VERSIONED_DOCKER_IMAGE="${DOCKER_IMAGE}:${VERSION}"
kustomize edit set image "${DOCKER_IMAGE}=${VERSIONED_DOCKER_IMAGE}"
The interesting thing here is kustomize edit set image. It adds a section to your kustomization.yaml that sets the image version everywhere your image is referenced, which makes it super easy to change the version later – it’s just this one kustomize command. You can also write the relevant kustomize configuration by hand, but this command is super useful for building more reliable automation flows.
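For reference, the section it adds to kustomization.yaml looks roughly like this:
images:
- name: registry.digitalocean.com/<REGISTRY>/<IMAGE>
  newTag: 0.0.0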
Once you’ve set it to version 0.0.0, or whatever version you’ve chosen to deploy first, you’re finally ready to deploy your application to Kubernetes.
Run this from your Rails root (or anywhere else and adjust the path accordingly):
kustomize build k8s/overlays/prod/ | kubectl apply -f -
And, boom! Your application is deployed. But you won’t be able to access it just yet. First things first, run kubectl get services to find your load balancer’s external IP. If you visit that IP, you should get an Nginx error: it doesn’t know what to do with you, because all it knows is how to route your domain name. We’ll set that up next. You may notice your application’s service is not visible in the results of that command; that’s because it’s deployed to a separate namespace, don’t worry.
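You can check how the rollout is going in your application’s namespace (substituting your own names):
kubectl get pods --namespace <APPLICATION_SHORT_NAME>-prod
kubectl rollout status deployment/<APPLICATION_NAME> --namespace <APPLICATION_SHORT_NAME>-prod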
Take that external IP and point your DNS A record at it. It might take a little while to propagate. If you use your ISP’s default DNS (if you don’t know what that means, you probably do), consider setting up Cloudflare’s or Google’s instead. They’re free, easy to set up, and will likely make your browsing faster and more reliable, as well as stopping your ISP hijacking your DNS. In this case, it also means you should see your domain update almost instantly!
There are still some things left to do: your database isn’t set up yet, so your init containers will fail to run migrate and seed, and your TLS certificate won’t be working either – but more on that soon…
Secret setup
You need to set up two secrets: a database secret, and your Rails master key.
The files above assume these are stored in rails-db-key and rails-master-key. You need to push them to the right namespace, which I recommended calling <APPLICATION_SHORT_NAME>-prod, though you may have called it something else. Run the following, using a random password for your DB key:
kubectl create secret generic rails-db-key --namespace <NAMESPACE> --from-literal="key=<RANDOM PASSWORD>"
kubectl create secret generic rails-master-key --namespace <NAMESPACE> --from-literal="key=<YOUR RAILS MASTER KEY>"
And your database needs a first-time setup. That’s an easy fix:
kubectl run -it --rm db-setup --namespace <NAMESPACE> --image=<YOUR RAILS IMAGE PATH WITH VERSION> -- bash
This will give you a bash shell inside your Rails image. Depending on your database.yml, you may also need to provide DATABASE_USERNAME, DATABASE_PASSWORD and RAILS_MASTER_KEY (kubectl run accepts --env flags, or you can export them inside the shell) so Rails can reach the database. Then just run the usual:
RAILS_ENV=production bin/rails db:setup
Quit the container with exit, and it will automatically be cleaned up (since we passed the --rm flag). Now your application should boot up, connect to its database, and be working… over HTTP, at least…
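If anything seems stuck, the pod logs usually explain why (-c selects a container within the pod):
kubectl get pods --namespace <NAMESPACE>
kubectl logs <POD_NAME> -c <APPLICATION_NAME> --namespace <NAMESPACE>
kubectl logs <POD_NAME> -c db-migrate --namespace <NAMESPACE>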
About those certificate errors…
Now, cert-manager automatically sets up TLS certificates – but it won’t be working right now. For reasons that seem to be being worked on by the Kubernetes folks in collaboration with the various cloud providers, cert-manager cannot complete its self-check on ACME challenges while PROXY protocol is in place. As I understand it, the self-check traffic never leaves the cluster, so it doesn’t pass through the load balancer, never gets the PROXY headers added, and is then rejected by the ingress (I may be misunderstanding, but I believe that’s the gist of it).
It’s a pretty easy fix, but it’s potentially disruptive: disable PROXY protocol, delete the certificate to prompt cert-manager to retry (it would do so eventually anyway, but forcing it is faster), and re-enable PROXY protocol once TLS is working. This means that for a small window once every 90 days (the default renewal length) you’ll need to either schedule downtime or accept the temporary loss of client IP address resolution in your Rails app.
If you truly don’t care about client IP resolution, you can avoid using the PROXY protocol altogether, but I don’t recommend this: IP addresses can be very useful for all sorts of things, not least of all post-incident security analysis.
Anyway, you can do that as follows:
# Or whatever you named your namespace
NAMESPACE="<YOUR APPLICATION>-prod"

echo "Disabling proxy protocol; must also be disabled on the DigitalOcean load balancer"
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --set controller.publishService.enabled=true \
  --set-string controller.config.use-forward-headers=true,controller.config.compute-full-forward-for=true

echo "Deleting existing certificate"
kubectl delete certificate --all --namespace "${NAMESPACE}"

echo "Sleeping while certificates refresh..."
sleep 15

echo "Re-enabling proxy protocol"
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --set controller.publishService.enabled=true \
  --set-string controller.config.use-forward-headers=true,controller.config.compute-full-forward-for=true,controller.config.use-proxy-protocol=true \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/do-loadbalancer-enable-proxy-protocol"=true
Make sure you disable PROXY protocol in your DigitalOcean load balancer settings (on the DigitalOcean website) beforehand, and re-enable it afterwards. The sleep 15 likely allows far more time than is actually necessary; if you like, you can wait until the site loads over HTTPS before running the final helm command and re-enabling PROXY protocol on the load balancer.
Notes
You can add new environments super simply – just copy the overlays/prod folder to, for example, overlays/staging, then adjust the files within it to fix the namespace (both in namespace.yaml and in kustomization.yaml), the Rails environment flag, and the hostnames in the ingress settings. You’ll need to repeat everything from the secret setup onwards for the new environment, but it should mostly be familiar by now.
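As a sketch (assuming the file layout used throughout this guide):
cp -r k8s/overlays/prod k8s/overlays/staging
# Edit namespace.yaml and kustomization.yaml to use <APPLICATION_SHORT_NAME>-staging,
# change the --environment argument in application.yaml and the ingress hosts, then:
kustomize build k8s/overlays/staging/ | kubectl apply -f -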
Note that because database migration and seeding are done in init containers, they probably aren’t safe to run concurrently, so you can’t just bump the replica count as you’d normally want to with Kubernetes. You’ll need to configure something more involved to be safe against this, sadly. You could have your deployment script shell into the cluster and run db:migrate whenever you deploy, for example, or have some fancy CI/CD solution doing it all for you.
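A rough sketch of the manual approach:
# Find a running application pod, then run migrations inside it
POD="$(kubectl get pods --namespace <NAMESPACE> -l app=<APPLICATION_NAME> -o jsonpath='{.items[0].metadata.name}')"
kubectl exec --namespace <NAMESPACE> "$POD" -c <APPLICATION_NAME> -- bin/rails db:migrate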
End
I hope this guide has been of use to someone – it took a lot of trial and error to get this working properly and I thought it might be valuable to share. However, I’m very open to feedback to improve this! Please feel free to drop comments with any problems you ran into, constructive criticism, or even just a hello if it helped :).