Deis Workflow + Kubernetes + AWS

Jan Riethmayer
Jul 29, 2016

This summarizes the few important details I had to configure to get Deis Workflow ready on AWS.

The TL;DR

I use backing services wherever I can. I configure everything via environment variables, so I don’t have to change any files when downloading Deis Workflow. This post gives an overview of how to properly configure resources within a VPC. It’s also a checklist for the future me (why can’t I remember things?).

Motivation

I want to solve the deployment setup for a variety of technologies I use. Docker is the chosen abstraction layer. So what’s the best way to bring Docker containers to production?

I like the Heroku workflow because of its simplicity. Is it possible to use this workflow on your private cloud? I believe we’re close; Deis Workflow looks very promising.

In this post I show you how to configure Deis Workflow on AWS, including all the details.

Set up the Kubernetes cluster

First, set up the environment variables

# file: .env
# Use AWS credentials via a specific profile
# (set up beforehand with: aws configure --profile solarforschools)
export AWS_DEFAULT_PROFILE=solarforschools
# Kubernetes cluster configuration
export INSTANCE_PREFIX=k8s-production
export KUBERNETES_PROVIDER=aws
export KUBE_AWS_ZONE=eu-west-1a
export KUBE_ENABLE_INSECURE_REGISTRY=true
export MASTER_SIZE=t2.medium
export NODE_ROOT_DISK_SIZE=100
export NODE_SIZE=t2.large
export NUM_NODES=2

and boot up the cluster

source .env && ./cluster/kube-up.sh

You’ll have a Kubernetes cluster running in a few minutes.
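Once kube-up.sh reports success, a quick sanity check is worthwhile (a sketch, assuming the script has written your kubeconfig, which it does by default):

```shell
# List the registered nodes -- you should see NUM_NODES entries in status Ready
kubectl get nodes

# Confirm the system pods came up cleanly
kubectl get pods --namespace=kube-system
```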

Set up the backing services

Let’s start with the database

Database setup via RDS Postgres

First you have to set up subnets for the Kubernetes VPC. This is required so your nodes can access the database from all Availability Zones.

Kubernetes VPC subnets in all Availability Zones

Then create a DB Subnet Group for the Kubernetes VPC. Otherwise you’re not able to locate the RDS database within the same VPC.

Example configuration for a DB Subnet Group on AWS.

Now you’re able to launch a DB instance. I’m using the free tier database.

Setup RDS database in the Kubernetes Cluster
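If you prefer the CLI over the console, the same setup can be sketched with the AWS CLI. The subnet IDs, group names, and password below are placeholders, not values from this setup:

```shell
# Create the DB Subnet Group inside the Kubernetes VPC
aws rds create-db-subnet-group \
  --db-subnet-group-name deis-db-subnets \
  --db-subnet-group-description "Subnets for the Deis RDS instance" \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222 subnet-cccc3333

# Launch a free-tier Postgres instance in that subnet group
aws rds create-db-instance \
  --db-instance-identifier deis-db \
  --db-instance-class db.t2.micro \
  --engine postgres \
  --allocated-storage 20 \
  --master-username deis \
  --master-user-password CHANGE_ME \
  --db-subnet-group-name deis-db-subnets \
  --no-multi-az
```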

Now it’s time to configure your database as a backing service

# file: .env (continued)
# OFF-Cluster RDS configuration
export DATABASE_HOST=deis-db.xxx.eu-west-1.rds.amazonaws.com
export DATABASE_LOCATION=off-cluster
export DATABASE_NAME=deis
export DATABASE_PASSWORD=xxxyyyzzzz
export DATABASE_PORT=5432
export DATABASE_STORAGE=s3
export DATABASE_USERNAME=deis
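With the variables loaded, you can verify connectivity from a machine inside the VPC (assuming the psql client is installed there):

```shell
source .env

# Prints the Postgres server version if host, port, and credentials are correct
PGPASSWORD="$DATABASE_PASSWORD" psql \
  --host="$DATABASE_HOST" \
  --port="$DATABASE_PORT" \
  --username="$DATABASE_USERNAME" \
  --dbname="$DATABASE_NAME" \
  --command="SELECT version();"
```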

Here we see the first reference to S3. Let’s set up S3 as our object storage.

Set up S3 as object storage

In order to have access to S3, you need to set up security credentials via IAM. Visit the IAM Users section on AWS and create a user (I called mine deis).

Once the user is created, you have to grant the user access to S3 via the Permissions tab.

Permission settings for the S3 IAM user
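The console steps above can also be done from the shell; a minimal sketch using the AWS-managed AmazonS3FullAccess policy (in production you may want a narrower, bucket-scoped policy instead):

```shell
# Create the user and grant it S3 access
aws iam create-user --user-name deis
aws iam attach-user-policy \
  --user-name deis \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Generate the access key pair referenced later in .env
aws iam create-access-key --user-name deis
```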

Now let’s set up the environment variables

# file: .env (continued)
# S3 object-storage configuration
export AWS_BUILDER_BUCKET=solarforschools-builder
export AWS_DATABASE_BUCKET=solarforschools-database
export AWS_REGISTRY_BUCKET=solarforschools-registry
export S3_REGION=eu-west-1
export STORAGE_TYPE=s3
export AWS_ACCESS_KEY=KEY_FROM_YOUR_DEIS_USER
export AWS_SECRET_KEY=SECRET_FROM_YOUR_DEIS_USER

Don’t forget to create the S3 buckets

Create S3 Buckets matching your environment configuration
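Creating the buckets from the shell keeps them in sync with the .env file:

```shell
source .env

# One bucket each for the builder, database backups, and registry
for bucket in "$AWS_BUILDER_BUCKET" "$AWS_DATABASE_BUCKET" "$AWS_REGISTRY_BUCKET"; do
  aws s3 mb "s3://$bucket" --region "$S3_REGION"
done
```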

The last missing piece is setting up Redis.

Set up Redis via ElastiCache

First, we set up a Cache Subnet Group, as we did with the database.

Cache Subnet Group settings for Redis

Make sure to add the Redis cluster (in my case a single t2.micro instance) to the correct Cache Subnet Group.

ElastiCache Subnet Group configuration
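The equivalent CLI sketch (the subnet IDs and group names are placeholders):

```shell
# Cache Subnet Group inside the Kubernetes VPC
aws elasticache create-cache-subnet-group \
  --cache-subnet-group-name deis-redis-subnets \
  --cache-subnet-group-description "Subnets for the Deis Redis cluster" \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222 subnet-cccc3333

# Single-node t2.micro Redis cluster in that subnet group
aws elasticache create-cache-cluster \
  --cache-cluster-id deis-redis \
  --engine redis \
  --cache-node-type cache.t2.micro \
  --num-cache-nodes 1 \
  --cache-subnet-group-name deis-redis-subnets
```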

And then update your .env

# file: .env (continued)
# OFF-Cluster Redis configuration
export LOGGER_REDIS_LOCATION=off-cluster
export LOGGER_REDIS_DB=0
export LOGGER_REDIS_HOST=deis-redis.xxx.yyy.euw1.cache.amazonaws.com
export LOGGER_REDIS_PORT=6379
export LOGGER_REDIS_PASSWORD=""
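A quick connectivity check from inside the VPC (assuming redis-cli is installed):

```shell
source .env

# Should answer PONG if the host and port are reachable
redis-cli -h "$LOGGER_REDIS_HOST" -p "$LOGGER_REDIS_PORT" ping
```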

In order for your backing services to be reachable, you’ve got to open their ports in your security group settings.

Update your security group settings

Choose the default group for your Kubernetes cluster. There’s another default one, which is created for the EC2 account.

How to identify the Kubernetes default security group

On the Inbound tab, set up custom TCP rules like this (Redis + Postgres)

Redis and Postgres inbound rules
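The same inbound rules can be added via the CLI. The group ID and source CIDR below are placeholders; locking the source down to your VPC CIDR is safer than opening the ports to 0.0.0.0/0:

```shell
# Allow Postgres (5432) and Redis (6379) from within the VPC
for port in 5432 6379; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port "$port" \
    --cidr 172.20.0.0/16
done
```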

Now we’ve set up everything needed to install Deis.

Refresh your environment variables in your console before you install Deis Workflow

$ source .env # with all the setup we've done
$ helmc fetch deis/workflow-v2.2.0
---> Fetched chart into workspace ...
---> Done
$ helmc generate -x manifests workflow-v2.2.0 # respects your .env
---> Ran 16 generators.
$ helmc install workflow-v2.2.0
... (lots of output)
---> Done
$ kubectl --namespace=deis get pods

After this, you should have your Deis cluster running on AWS, powered by Kubernetes. I’m very happy I’ve now successfully separated the backing services from the actual Kubernetes cluster.

Things you should do now

  • read this guide again
  • configure your AWS load balancer
  • configure your DNS
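For the load balancer and DNS steps: the Deis router is exposed through a Kubernetes service of type LoadBalancer, and its ELB hostname is what your DNS should point at. A sketch, assuming the default deis-router service name:

```shell
# The EXTERNAL-IP column shows the AWS ELB hostname for the Deis router
kubectl --namespace=deis get svc deis-router

# Point a wildcard record (e.g. *.apps.example.com) at that ELB hostname
```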

Discussion

Whether the particular technologies I use here are the right ones, I honestly don’t know. With so much abstracted away, it’s quite hard to understand how to debug things when something blows up.

I’m quite afraid I’m abstracting away too much. I love Ansible on the one hand, but it’s quite a task to deal with all the details of monitoring, log management, failover configuration, etc.

Most of the problems I had setting up Deis Workflow with Kubernetes were caused by the lack of good error messages. I really hope this improves, as it would save tons of time (e.g. wrong credentials or typos in environment variables are hard to discover).

Hope this was helpful to you; if so, please share / like the post. If you have questions, just leave a comment.
