Mac: Make Your External Drive Read-Only

I have a ton of music software content (samples, sounds, etc.) that is too large to take with me on my MacBook Pro directly. Therefore, I bought an external USB-C Samsung T5 mini SSD that is super fast and has a small form factor. I then moved my static sound library (Alchemy, Apple Loops, EXS samples, Native Instruments Komplete, etc.) to that disk and created symbolic links pointing to the SSD.

Now, when I am at a hotel, I just connect the SSD to get my full library on the go. Very convenient, and it doesn't interfere with my business stuff. However, there is an issue: the drive can get disconnected at times (the USB-C connector on the Samsung cable is poor), and sometimes when moving around with the laptop I just want to unplug the drive quickly without that annoying message from macOS that I disconnected the drive improperly. And macOS is right: operating systems nowadays continuously write to drives, and disconnecting without first ejecting the drive can result in corrupted files.

The solution for my use case is to make the drive read-only. Since I only use it for static files, this is just perfect. Now I can unplug the drive whenever I want, accidentally or not, and macOS stays quiet. In addition, I cannot accidentally overwrite files or move things around. And if I do want to write to the disk, it is just a quick edit (change the ro below to rw).

How to do it? With the external drive inserted, run

diskutil info -all

and pick Type, Volume UUID and Volume Name of your drive. In my case this is:

   Volume Name:              T5
   Volume UUID:              20B59A9D-5A8B-4B57-B4F7-03123445B
   Type (Bundle):            apfs
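
If the full listing is too noisy, you can also query the mounted volume directly and filter for the relevant fields (a sketch, assuming your volume is mounted at /Volumes/T5):

diskutil info /Volumes/T5 | grep -E 'Volume Name|Volume UUID|Type \(Bundle\)'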

Use vifs to edit your /etc/fstab (the Unix way to control how file systems are mounted) and insert these two consecutive lines (per volume):

UUID=20B59A9D-5A8B-4B57-B4F7-03123445B none apfs ro
LABEL=T5 none apfs ro

The ro stands for read-only. Now eject the drive once more and reinsert it. If you run diskutil info again, you can verify that the drive is no longer writable. You're good to go.
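
To check without reading through the whole output, you can filter for the read-only flags (again assuming the mount point /Volumes/T5):

diskutil info /Volumes/T5 | grep 'Read-Only'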

Ping me on Twitter if you need details: @gekart


Silencing your iMac: Decrease Minimum Fan Speed to 1000rpm

I finally replaced the hard drive in my 2015 iMac 5K with a Samsung 860 SSD. Not an easy decision, since you need to cut the Mac open. However, apart from the HD being way too slow, it is also very noisy: there is a hiss-type sound and rumble from the rotational mass. Unbearable in a quiet environment.

After installing the SSD, the speed and noise improvements are tremendous. But then you hear the fan "idling" at 1200rpm, resulting in quite a strong airflow and distracting noise no matter what your system load is. It seems that on my low-spec iMac, the regular (Apple auto-controlled) rpm never gets regulated above those 1200rpm, even when running Unreal 3D benchmarks. So the default 1200rpm can be seen as a static value: it doesn't go above or below that no matter what you do to the system. I downloaded Macs Fan Control, but you can't reduce the rpm below the minimum of 1200 (you can, however, go up to 2700). Luckily I found a way around that and reduced the minimum to 1000rpm, which is considerably less noisy! It makes all the difference to me.

What you need is a utility that is included in the smcFanControl tool. It's called smc and is also available separately on GitHub. Before you go on, you must read the following first:

Warning
-------
This tool will allow you to write values to the SMC 
which could irreversibly damage your computer.  
Manipulating the fans could cause overheating and permanent damage.

USE THIS PROGRAM AT YOUR OWN RISK!

You can and will destroy stuff and I won't take any responsibility, but here is what worked for me (assuming that you copied smcFanControl to the Applications directory):

In the Terminal app type:

/Applications/smcFanControl.app/Contents/Resources/smc -k F0Mn -w 0fa0

This writes the value for 1000rpm (0fa0) to the SMC key F0Mn (minimum speed of fan 0). The hex value is 4000, not 1000, because the fpe2 data type used for the fan speed keys stores the rpm in fixed-point format with two fractional bits, i.e. rpm × 4. If you need further explanation, the GitHub repo provides a detailed readme.
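
If you want a different minimum, a quick shell calculation gives you the matching hex value (a sketch, relying on the rpm × 4 fpe2 encoding mentioned above):

# convert a target rpm to the four-digit hex value expected by smc
printf '%04x\n' $((1000 * 4))   # prints 0fa0 for 1000rpm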

After that, use Macs Fan Control to reduce the speed manually to 1000rpm. Reading out the smc info now looks like this:

/Applications/smcFanControl.app/Contents/Resources/smc -f
Total fans in system: 1

Fan #0:
    Fan ID       : Main 
    Actual speed : 999
    Minimum speed: 1000
    Maximum speed: 2700
    Safe speed   : 0
    Target speed : 1000
    Mode         : forced

This results in almost complete silence in my environment. Yet, there is still considerable airflow and everything looks and feels great. What a relief!

Update 1:

Lowering the minimum speed below 1000rpm does not yield the desired results. It seems that the fan and its control logic can't go much below that: with the minimum speed set to 800rpm, the lowest rpm I ever achieved is around 960.

Update 2:

After rebooting, the minimum fan speed got reset (by the system, supposedly) to the original 1200rpm. However, Macs Fan Control writes my desired target speed to the SMC when it starts, so that still works! Great! Here is how that looks:

/Applications/smcFanControl.app/Contents/Resources/smc -f
Total fans in system: 1

Fan #0:
    Fan ID       : Main 
    Actual speed : 959
    Minimum speed: 1200
    Maximum speed: 2700
    Safe speed   : 0
    Target speed : 800
    Mode         : forced

Changing Encryption of PersistentVolumes in Kubernetes

Docker containers are treated as ephemeral, especially when they are managed in a Kubernetes cluster. Starting and restarting containers in the cluster happens automatically and works like a breeze. Things get more complicated as soon as you decide to keep state in your cluster. This is typically done by attaching volumes to the cluster (writing to a host-mounted volume is an absolute no-no, except for experimental purposes).

On AWS you would typically use dynamic PersistentVolumes backed by EBS or EFS. Once you attach a volume, its AWS volume ID gets tracked by Kubernetes.

In the output below you can see that the volume ID is aws://eu-central-1c/vol-a559b271b5fca7dfa:

$ kubectl get pv pvc-a0f6a3ea-ab5a-a1ea-a2ba-a2809894a24a -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: aws-ebs-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
  creationTimestamp: 2018-02-06T15:17:17Z
  labels:
    failure-domain.beta.kubernetes.io/region: eu-central-1
    failure-domain.beta.kubernetes.io/zone: eu-central-1c
  name: pvc-a0f6a3ea-ab5a-a1ea-a2ba-a2809894a24a
  resourceVersion: "125511"
  selfLink: /api/v1/persistentvolumes/pvc-a0f6a3ea-ab5a-a1ea-a2ba-a2809894a24a
  uid: a0f6a3ea-ab5a-a1ea-a2ba-a2809894a24a
spec:
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://eu-central-1c/vol-a559b271b5fca7dfa
  capacity:
    storage: 20Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: mysql-pv-claim
    namespace: default
    resourceVersion: "22108"
    uid: a0f6a3ea-ab5a-a1ea-a2ba-a2809894a24a
  persistentVolumeReclaimPolicy: Delete
  storageClassName: gp2
status:
  phase: Bound

Once that volume has been created, you lose the ability to control its encryption status or to change its KMS key.

Why would you want to do such a thing in the first place? Say, for example, you created a volume unencrypted, wrote a lot of application state to it, and now want to encrypt the volume. Or you encrypted the volumes with the default key but now decide to use a customer managed KMS key (CMK) and therefore need to change the KMS key of that volume. A simple way would be to create a second volume in Kubernetes and then copy the files over. However, when you do not have access to the applications that are using these volumes, this might not be that easy, particularly if you have hundreds of volumes.

In a pure AWS world, a process exists for this (a CLI sketch follows the list):

  • detach the volume (by stopping the instance),
  • create a snapshot,
  • copy snapshot using the correct encryption method,
  • create a volume out of snapshot in the correct parameters (specifically AZ) and
  • attach it to the instance.
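
Here is a minimal sketch of the snapshot steps with the AWS CLI (the volume and snapshot IDs, region, AZ and key alias are all placeholders):

# snapshot the detached volume
aws ec2 create-snapshot --volume-id vol-OLD --description "re-encrypt"

# copy the snapshot, encrypting the copy with the desired KMS key
aws ec2 copy-snapshot --source-region eu-central-1 --source-snapshot-id snap-SRC --encrypted --kms-key-id alias/my-key

# create the new volume from the copied snapshot in the same AZ as the old one
aws ec2 create-volume --snapshot-id snap-COPY --availability-zone eu-central-1c --volume-type gp2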

For volumes that are attached to nodes (not masters), you can stop the instances. But before you do that, suspend the Launch process on the corresponding ASG (Auto Scaling Group). Otherwise the ASG will start a new instance and probably mount the EBS volume there.

If you terminate the instances (as you probably should in an immutable world), you need to make sure that the ASG starts the new ones in the same AZ. I had a case with 2 nodes in 3 AZs, and sure enough the round-robin ASG started the new instance in the previously unused AZ. The problem is that EBS volumes are AZ-bound, so the volume couldn't be attached to the new instance since the two were in different AZs.

However, this procedure changes the volume ID that is tracked by Kubernetes. Here our beloved kubectl patch comes in handy:

kubectl patch pv pvc-a0f6a3ea-ab5a-a1ea-a2ba-a2809894a24a -p '{"spec":{"awsElasticBlockStore":{"volumeID":"aws://eu-central-1c/vol-a559b271b5fca7dfa"}}}'

You will also need to copy all the tags to the new volume and modify the tags on the old volumes.
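
A sketch for the tag copying with the AWS CLI (vol-OLD and vol-NEW are placeholders; the --query projection reduces the describe-tags output to the Key/Value list that create-tags accepts):

aws ec2 describe-tags --filters "Name=resource-id,Values=vol-OLD" --query 'Tags[].{Key:Key,Value:Value}' > tags.json
aws ec2 create-tags --resources vol-NEW --tags file://tags.json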

In summary, on Kubernetes I have successfully tried the following procedure:

  • gracefully shutdown the persistent application,
  • pick a volume to exchange and its corresponding node (check volumeID on the pv and volumesAttached on the no),
  • suspend Launch on node ASG,
  • terminate the instance (volume becomes available),
  • check application pods are pending,
  • create a snapshot,
  • copy snapshot using the correct encryption method,
  • create a volume out of snapshot in the correct parameters (specifically AZ, size),
  • copy all tags from original volume (and modify tags on original volume),
  • kubectl patch the PV's volumeID,
  • resume Launch on node ASG,
  • wait for a new instance (check it is in the same AZ as the old one),
  • check pods are running and application is working fine.

Only use procedures like this in your experimental environments. I take no responsibility for any damage!


List Kubernetes Master Nodes

You can use the command below to show all nodes that are acting as masters in your cluster. This is particularly useful when dealing with kops and some versions of Canal networking that (accidentally) manipulate the status of the nodes.

kubectl get nodes -o json | jq -r '.items[] | select(.spec.taints[]?.key=="node-role.kubernetes.io/master") | .metadata.labels."kubernetes.io/hostname"'
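
If your masters also carry the usual role label (kops sets it, but treat the label name as an assumption for your setup), a shorter label-based check is:

kubectl get nodes -l node-role.kubernetes.io/master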

If you have a setup where masters are clearly separated from nodes, no user workloads should be scheduled on those masters (because, for example, the security groups will prevent application communication between masters and nodes). In this case, you may want to check that the NoSchedule effect is applied as a taint:

kubectl get nodes -o json | jq -r '.items[] | select(.spec.taints[]?.key=="node-role.kubernetes.io/master" and .spec.taints[]?.effect=="NoSchedule") | .metadata.labels."kubernetes.io/hostname"'

Using kops and AWS Bastion Hosts Correctly

You have correctly provisioned your AWS infrastructure using AWS Bastion Quickstart or with kops and want to connect to your private instances using the bastion hosts.

First some principles:

  • Terminate your bastion host after using it (set autoscaling to 0; see the CLI sketch after this list).
  • Do not use your bastion host as a store, especially not for anything security-relevant.
  • In particular, never copy any private key to the bastion host.
  • Close down the security group to your specific IP.
  • After using the bastion host, close down the security group completely.
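
Scaling the bastion in and out can be done from the CLI (a sketch; the ASG name is a placeholder for whatever your Quickstart or kops setup created):

# terminate the bastion host by scaling its ASG to zero
aws autoscaling update-auto-scaling-group --auto-scaling-group-name bastion-asg --min-size 0 --desired-capacity 0

# bring it back when needed
aws autoscaling update-auto-scaling-group --auto-scaling-group-name bastion-asg --min-size 0 --desired-capacity 1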

This may all sound paranoid, but even if some of the threats are theoretical at best, your security compliance group will come hunting for you if you don't follow these steps.

To connect to any server in the private subnets, you can use ssh proxying. Add the following snippets to your ~/.ssh/config file:

Host bastion
HostName <fqdn of your bastion-elb or bastion-host>
IdentityFile ~/.ssh/bastion_rsa # your bastion private key
User ec2-user

Host private-server
User ubuntu
HostName <private IP of your server in private subnet>
ProxyCommand ssh -q -W %h:%p bastion
IdentityFile ~/.ssh/privateserver_rsa # your private server's private key
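
As an aside, on OpenSSH 7.3 or newer you can replace the ProxyCommand line above with the shorter ProxyJump directive (same effect, same bastion alias):

ProxyJump bastion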

Try connecting to your bastion host first:

ssh bastion

Then, after succeeding, log out and try:

ssh private-server

If you follow the principles above and re-provision your bastion host (set autoscaling to 1) every time you need it, you will run into an issue: ssh will refuse to connect to the server, since you kept the name but the server's identity (host key) has changed. This is a security measure that prevents an intruder from redirecting traffic to a new server while keeping the name. You can safely delete the old server's identity from ssh by using:

ssh-keygen -R <bastion-elb>

Kubernetes: How to Find Out if ABAC or RBAC is Active

If you want to find out which authorization mode your cluster is running, use:

kubectl cluster-info dump | grep authorization-mode
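
On an RBAC cluster the matching line comes from the kube-apiserver invocation and might look like this (the exact formatting varies with your cluster setup):

"--authorization-mode=RBAC",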

Kubernetes: kubectl run with ImagePullSecrets

If you just want to quickly start a pod, you can use the following one-liner:

kubectl run <name> -it --restart=Never --image=<public image>

This limits you to public repos. There is no flag equivalent to the imagePullSecrets field in your YAML files, so this won't work:

kubectl run <name> -it --restart=Never --image=<private image> --image-pull-secrets=<secret>

However, you can use the following workaround (a lot to type):

kubectl run <name> -it --restart=Never --image=<private image> --overrides='{ "apiVersion": "v1", "spec": {"imagePullSecrets": [{"name": "<secret>"}]} }'
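
For completeness: if the pull secret does not exist yet, you can create it with kubectl create secret docker-registry (all values are placeholders; depending on your kubectl version you may also need --docker-email):

kubectl create secret docker-registry <secret> --docker-server=<registry> --docker-username=<user> --docker-password=<password>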

Terminal: Copying Long Text Lines Without Newline Breaks

If you copy long, wrapped lines of shell code in the Mac Terminal.app, you want the pasted text to be broken into multiple lines exactly where the line breaks were in the original script.

When you are using more, less or any other page-formatting tool, your lines will be broken up wherever they happen to wrap on the screen.

I therefore use cat for displaying short scripts. It does not reformat lines and keeps newlines where they need to be.


Kubernetes: Using kubectl with 100s of Clusters

If you are working with many clusters, you need to manage multiple kubectl configs. You can do this using kubectl --kubeconfig=<file> or by merging your config files.

If, however, you want to keep access to your config files separate, you can use a script that swaps the ~/.kube/config symbolic link. I organize my clusters numerically (based on the AWS account number), so I rename each cluster config file to, say, 0123 and put it in my ~/.kube directory. I then run kubeconf 0123 to activate that config. Here is the kubeconf script (located in /usr/local/bin):

#!/bin/bash

set -e

if [ $# -ne 1 ]; then
    echo "Usage: $0 <config name>"
    exit 1
fi

if [ ! -f ~/.kube/"$1" ]; then
    echo "$0: config file ~/.kube/$1 not found"
    exit 2
fi

if [ -L ~/.kube/config ]; then
    # config is just a symlink created by a previous kubeconf run, safe to remove
    rm ~/.kube/config
else
    # config is a real file: keep a timestamped backup instead of losing it
    BACKUP=~/.kube/config.$(date +%F-%T).backup
    echo "Saving current config file to $BACKUP"
    mv ~/.kube/config "$BACKUP"
fi

echo "Configuring $1 as config"
ln -s ~/.kube/"$1" ~/.kube/config
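
If you only need a different config for a single shell session, kubectl's built-in KUBECONFIG environment variable is an alternative that leaves the symbolic link untouched:

export KUBECONFIG=~/.kube/0123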

Using your Synology Diskstation as a SOHO git Server

Yes, you should use GitHub for your public and private repos (IMHO), but if you just need a small-office solution for a few machines and/or users and already have a Diskstation (RAID config and backup recommended), why not install a git server locally?

Install the git server as described in the Synology guide. Create users in DSM that match your workstation users and are assigned to the group Users. In the Git Server DSM manager, grant those users access to git. Then use the following steps to create repos on the server (the guide assumes diskstation is the DNS name of your Synology):

ssh admin@diskstation
cd /volume1/Repos # that folder needs to be created within volume manager

git init --bare --shared MyRepo.git

The --bare option makes the repo a server-side representation of a git repository without a working directory, and --shared makes the repo read/write enabled for all members of the Users group.

On your workstations you can then use the git ssh protocol to clone the created empty repos:

# if your local user and diskstation user match, clone as:
git clone ssh://diskstation/volume1/Repos/MyRepo.git

# if your local user and disk station user do not match, clone as:
git clone ssh://your-user@diskstation/volume1/Repos/MyRepo.git

If you want to push your existing local git repos to the newly created diskstation counterparts use:

cd MyRepo
git remote add origin ssh://diskstation/volume1/Repos/MyRepo.git
git push --set-upstream origin master

After the first push with --set-upstream, you can omit the option for further pushes.

You can save yourself from typing passwords and increase security by using ssh keys (copy your public key to the diskstation).
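
A quick sketch for installing the key (assuming ssh-copy-id is available on your workstation; otherwise append your public key to ~/.ssh/authorized_keys on the diskstation by hand):

ssh-copy-id your-user@diskstation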
