Using kops and AWS Bastion Hosts Correctly

You have provisioned your AWS infrastructure with the AWS Bastion Quickstart or with kops, and now want to connect to your private instances through the bastion hosts.

First some principles:

  • Terminate your bastion host after using it (set autoscaling to 0; see the CLI sketch below).
  • Do not use your bastion host as a store, especially not for anything security-relevant.
  • Specifically, never copy any private key to the bastion host.
  • Close down the security group to your specific IP.
  • After using the bastion host, close down the security group completely.

This may all sound paranoid, but even if some threats are theoretical at best, your security compliance group will come hunting for you if you don’t follow the steps.
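
A minimal sketch of how the scale-down and lock-down steps might look with the AWS CLI. The group name bastions.example.com and the security group ID sg-0123456789abcdef0 are placeholders for your own values:

# scale the bastion auto scaling group down to zero
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name bastions.example.com \
  --min-size 0 --max-size 0 --desired-capacity 0

# open SSH only for your current public IP ...
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr $(curl -s https://checkip.amazonaws.com)/32

# ... and revoke the rule again when you are done
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr $(curl -s https://checkip.amazonaws.com)/32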

To connect to any server in the private subnets, you can use ssh proxying. Add the following snippets to your ~/.ssh/config file:

Host bastion
HostName <fqdn of your bastion-elb or bastion-host>
IdentityFile ~/.ssh/bastion_rsa # your bastion private key
User ec2-user

Host private-server
User ubuntu
HostName <private IP of your server in private subnet>
ProxyCommand ssh -q -W %h:%p bastion
IdentityFile ~/.ssh/privateserver_rsa # your private server's private key

Try connecting to your bastion host first:

ssh bastion

Then, after succeeding, log out and try:

ssh private-server
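
If your client runs OpenSSH 7.3 or newer, you can achieve the same without the ProxyCommand line by jumping through the bastion with -J (host alias and user as defined above, the private IP is a placeholder):

ssh -J bastion ubuntu@<private IP of your server>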

If you follow the principles above and re-provision your bastion host (set autoscaling to 1) every time you need it, you will run into an issue: ssh will refuse to connect to the server, because the name stayed the same but the server's identity has changed. This is a security measure that prevents an intruder from redirecting traffic to a new server while keeping the name. You can safely delete the old server's identity from ssh with:

ssh-keygen -R <bastion-elb>

Kubernetes: How to Find Out if ABAC or RBAC is Active

If you want to find out which authorization mode your cluster is running, use:

kubectl cluster-info dump | grep authorization-mode
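
If the dump does not contain the flag, you can also grep the kube-apiserver pod spec directly. This is a sketch; the label selector depends on your installer (kops labels the pod k8s-app=kube-apiserver, kubeadm uses component=kube-apiserver):

kubectl -n kube-system get pods -l k8s-app=kube-apiserver -o yaml | grep authorization-mode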

Kubernetes: kubectl run with ImagePullSecrets

If you just want to quickly start a pod, you can use the following one-liner:

kubectl run <name> -it --restart=Never --image=<public image>

This limits you to public repos, however. There is no command-line equivalent to the imagePullSecrets field in your YAML files, so this won't work:

kubectl run <name> -it --restart=Never --image=<private image> --image-pull-secrets=<secret>

However, you can use the following workaround (a lot to type):

kubectl run <name> -it --restart=Never --image=<private image> --overrides='{ "apiVersion": "v1", "spec": {"imagePullSecrets": [{"name": "<secret>"}]} }'
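
In case the pull secret does not exist yet, you can create it beforehand. A sketch with placeholder values for registry, user, password and email:

kubectl create secret docker-registry <secret> \
  --docker-server=<your registry> \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>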

Terminal: Copying Long Text Lines Without Newline Breaks

If you copy long, wrapped lines of shell code from Mac Terminal.app, you want the pasted text to break into multiple lines exactly where the newlines were in the original script.

When you are using more, less or any other page-formatting tool, your lines will be broken up wherever they wrap on the screen.

I use cat for displaying short scripts: it does not reformat lines and keeps the newlines where they need to be.
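
To check where the real newlines are before copying, you can make cat mark the line endings (short-script.sh is a placeholder; -e works with both BSD cat on macOS and GNU cat):

cat -e short-script.sh   # prints a $ at every real line ending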


Kubernetes: Using kubectl with 100s of Clusters

If you are working with many clusters, you need to configure multiple kubectl configs. You can do this using kubectl --kubeconfig=<file> or by merging your config files.
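
For the merge route, here is a minimal sketch (cluster-0123 is a placeholder for a second config file):

# let kubectl merge both files and write the result back to the default location
export KUBECONFIG=~/.kube/config:~/.kube/cluster-0123
kubectl config view --flatten > ~/.kube/merged
mv ~/.kube/merged ~/.kube/config
kubectl config get-contexts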

If, however, you want to keep access to your config files separate, you might use a script that exchanges the ~/.kube/config symbolic link. I organize my clusters numerically (based on the AWS account number), so I rename each cluster's config file to, say, 0123 and put it in my ~/.kube directory. I then use kubeconf 0123 to activate that config. Here is the kubeconf script (located in /usr/local/bin):

#!/bin/bash

set -e

if [ $# -ne 1 ]; then
    echo "Usage $0: <config name>"
    exit -1
fi

if [ ! -f ~/.kube/$1 ]; then
    echo "$0: config file ~/.kube/$1 not found"
    exit -2
fi

if [ -L ~/.kube/config ]; then
    rm ~/.kube/config
else
    BACKUP=~/.kube/config.$(date  +%F-%T).backup
    echo "Saving current config file to $BACKUP"
    mv ~/.kube/config $BACKUP
fi

echo "Configuring $1 as config"
ln -s ~/.kube/$1 ~/.kube/config
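
Make the script executable and switch clusters like this (0123 being the example config from above):

chmod +x /usr/local/bin/kubeconf
kubeconf 0123
kubectl config current-context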

Using your Synology Diskstation as a SOHO git Server

Yes, you should use GitHub for your public and private repos (IMHO), but if you just need a small office solution for a few machines and/or users and already have a Diskstation (RAID config and backup recommended), why not install a git server locally?

Install the git server as described in the Synology guide. Create users in DSM that match your workstation users and assign them to the group Users. In the Git Server package in DSM, grant those users access to git. Then use the following steps to create repos on the server (this guide assumes diskstation is the DNS name of your Synology):

ssh admin@diskstation
cd /volume1/Repos # that folder needs to be created within volume manager

git init --bare --shared MyRepo.git

The option --bare makes the repo a server-side representation of a git repository without a working directory, and --shared makes the repo read/write for all members of the Users group.

On your workstations you can then use the git ssh protocol to clone the newly created empty repos:

# if your local user and diskstation user match, clone as:
git clone ssh://diskstation/volume1/Repos/MyRepo.git

# if your local user and diskstation user do not match, clone as:
git clone ssh://your-user@diskstation/volume1/Repos/MyRepo.git

If you want to push your existing local git repos to the newly created diskstation counterparts use:

cd MyRepo
git remote add origin ssh://diskstation/volume1/Repos/MyRepo.git
git push --set-upstream origin master

After pushing once with --set-upstream, you can omit the option for subsequent pushes.

You can save yourself from typing passwords and increase security by using ssh keys (copy your public key to the diskstation).
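
A minimal sketch, assuming the user home service is enabled in DSM so that ~/.ssh/authorized_keys is honored:

ssh-copy-id your-user@diskstation
ssh your-user@diskstation   # should now log in without a password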


Kubernetes & Kops: Make Your Own Encrypted Debian AMI

Kops is the most popular solution for installing Kubernetes on AWS in a highly available way. Debian is the preferred Linux distro for kops, which is somewhat annoying given that CoreOS is the preferred container Linux.

Moreover, the Debian AMI for kops is custom-built, and not only the OS itself but also the kernel. The AMIs are marked public, so you can easily reuse them. But as soon as you want to encrypt your images, you need access to the underlying snapshot, which is not public at the moment.

You can either instantiate an EC2 instance and encrypt its snapshot, or you can use kube-deploy to build your own image the same way kops does.

Clone the kube-deploy repo, configure your AWS credentials and run the following steps in the kube-deploy folder:

./hack/setup.sh # comment out the imagebuilder call in this file

# be careful not to overwrite your key at id_rsa!
ssh-keygen -t rsa -b 2048 -f $(pwd)/.ssh/id_rsa -C "${USER}@${HOSTNAME}" -N ""

aws ec2 import-key-pair --key-name "id_rsa" --public-key-material file://$(pwd)/.ssh/id_rsa.pub

docker run --rm -ti -v "$PWD":/usr/local/go/src/k8s.io/kube-deploy/imagebuilder -w /usr/local/go/src/k8s.io/kube-deploy/imagebuilder -v $HOME/.aws:/root/.aws:ro -v $HOME/.ssh:/root/.ssh:ro golang:1.8 bash

In the container that starts, run:

make

export AWS_REGION=$(aws configure get region)

imagebuilder --config=aws.yaml --replicate=false

Wait for the build to finish and then proceed with AMIs with Encrypted Snapshots.
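
Encrypting then boils down to copying the finished AMI with encryption enabled. A sketch with placeholder IDs and names (add --kms-key-id if you do not want the default EBS key):

aws ec2 copy-image \
  --source-image-id ami-0123456789abcdef0 \
  --source-region eu-west-1 --region eu-west-1 \
  --name k8s-debian-jessie-encrypted \
  --encrypted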


Kubernetes: Make Pods Run on Your Master Nodes

You might want to run some pods on your master nodes, too. This may be because they export master-node metrics, or simply to save resources: say you want many instances of a specific pod. In that case (and given you are on Kubernetes >= 1.7) you can use tolerations to override NoSchedule taints. Add this to your pod's spec:

tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master

You may even want your pods to run only on master nodes. Then add this node selector key to your spec:

nodeSelector:
  node-role.kubernetes.io/master: ""

Kubernetes: How to Make Your Node a Master

Under some circumstances Kubernetes forgets the NoSchedule taint on its master nodes (kops < 1.8.0 with canal networking). If this happens, your masters will get scheduled full of pods. In that case the following command might help you:

kubectl taint node master1.compute.internal node-role.kubernetes.io/master=:NoSchedule

This command seems to be buggy at times, complaining about wrong characters in the string. If that’s the case then try this nice patch:

kubectl patch node master1.compute.internal -p '{"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}]}}'

After you have set the taints correctly with the command above, you can terminate workload pods and they will get restarted on non-master nodes.
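
To verify that the taint is in place (node name as in the example above):

kubectl describe node master1.compute.internal | grep Taints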

BTW, if you do not take the measures above, not only will pods get scheduled on your masters, which might become problematic, but also any differences between your masters and nodes will catch you cold. E.g. you might open a security group on your nodes but not on your masters. If a pod gets scheduled to run on the master, it will not be able to communicate as desired.


Kubernetes: How to Delete all Taints from a Node

kubectl patch node node1.compute.internal -p '{"spec":{"taints":[]}}'
