Värde Utrusta Wall Cupboard Hinge 30min Fix Ikea Pro Hack

Ikea offered a nice modular kitchen in its program from the end of the 1990s through the 2000s.

Unfortunately, they didn’t support it well; I assume it was too expensive to produce. Ikea also resorted to cheap technical solutions. For example, the 70cm wall cupboards, available with or without glass doors and in white or birch, had a very poor solution for holding the horizontal door open: a damper that just had to fail eventually.

I replaced the damper multiple times, and the replacements got quite expensive on eBay since there were no original parts available. So we gave up in the end: the new kitchen is in the early planning stage.

However, since I had some spare small Utrusta hinges (https://www.ikea.com/us/en/p/utrusta-small-hinge-for-horizontal-door-white-90265736/), with some research I managed to replace the dampers with them. This is how the result looks, before applying the finishing touches:

The most difficult part of the story is that there are no mounting holes prepared to position the hinge correctly. See the red marks in the image below to find the positioning protrusions/bolts of the hinge. We need to drill two holes per side at the correct positions using the correctly sized wood drill bit (only 3mm deep!).

Pick the correctly sized drill bit for the protrusions. Hint: use the pre-drilled holes of the cabinet to double-check the drill size (it is a size 5, i.e. 5mm, bit in Germany, whatever that is called elsewhere…).

Procedure

WARNING: You should know how to drill. Apply common sense. Don’t hold me responsible, I am just describing my solution.

  • Prepare a paper template (use thick paper) with two holes, one 32mm from the top and the other 82mm from the top, both 52mm from the front (see image below).
  • Unmount the door by detaching the damper and the original hinges.
  • Hold the template to the top left and top right inner sides of the cupboard and punch-mark two holes (with the wood drill bit, for example).
  • Drill a 3mm deep hole at each of the 4 punch marks using the 5mm wood drill bit. Do not apply too much pressure; you don’t want to drill right through the cabinet!
  • Attach the Utrusta hinge by pushing the two protrusions on the side of the hinge into the holes. If you worked precisely, the hinge should already support itself.
  • From here on just follow the Utrusta mounting instructions.

And there you have it: this finally fixes the damper problem, works great with soft close, and looks professional.


Enter Docker VM on MacOS Catalina (SSH, xhyve)

As you might know, Docker containers need a Linux kernel in order to run (Linux containers, that is). So how does this work on a Mac, which doesn’t have a Linux kernel? Docker Desktop for MacOS installs a small Linux VM on your Mac using the xhyve hypervisor, which runs on top of MacOS’s built-in Hypervisor.framework.

So if you are curious and want to poke around in Docker’s internals, you’ll need to enter that VM. Say you want to run ps on the host to see the individual containers running as processes, look at how Docker images are stored, or, as I will show here, see how containers use overlay2 to merge the container filesystem with the underlying image filesystem. After starting a container, run this on your Mac to get the container’s upper directory path:

docker inspect --format='{{.GraphDriver.Data.UpperDir}}' <container>

Now switch to the Linux VM:

screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty

Then cd into the path returned by inspect to see the changes the container made to its filesystem, or run ps to see the “real” PID of the container process.
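
For example, a quick look around inside the VM might go like this (replace <id> with the hash from the UpperDir path reported above; the container command is a placeholder too):

cd /var/lib/docker/overlay2/<id>/diff   # the UpperDir path returned by docker inspect
ls -la                                  # files the container created or changed on top of the image
mount | grep overlay                    # shows the lowerdir/upperdir/workdir of the merged filesystem
ps aux | grep <container-command>       # the container's process as seen by the VM's kernel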

In order to exit the VM again, hit CTRL-A CTRL-\ and then y to confirm the exit.


Xcode 11 Unable to Boot Simulator Diehard

Xcode on my Macs had a persistent problem after updating to Xcode 10: the simulators wouldn’t work anymore. Lucky me, I was busy with AWS and Kubernetes projects… Updating to Xcode 11 didn’t fix anything. Searching the web didn’t help a ton, since there seem to be many similar problems with simpler solutions that unfortunately didn’t help at all:

https://stackoverflow.com/questions/52778170/xcode-10-0-simulator-unable-to-boot
https://stackoverflow.com/questions/24033417/unable-to-run-app-in-simulator-xcode-beta-6-ios-8

The solution was to run the following command to reset the developer’s path:

sudo xcode-select --reset

Not sure if the reboot I did after that was actually necessary, but I did it anyway.
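
If you want to double-check the result, something like this should do (the printed path is what I would expect for a standard installation, not necessarily what you will see):

xcode-select -p                      # should now print /Applications/Xcode.app/Contents/Developer
xcrun simctl list devices | head     # the simulator devices should list (and boot) again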

Moreover, while the installation of Xcode 11 went smoothly, the update to 11.1 failed multiple times. I had to delete Xcode and reinstall it cleanly. Now everything runs.


EBS Default Encryption Enables Launching Encrypted Instances From Unencrypted AMI Snapshots

Previously (before the end of May 2019), you had to encrypt the snapshot backing an AMI if you wanted to launch an instance with an encrypted root volume. This had some consequences for sharing AMIs: not only did the AMIs have to be shared but also their backing snapshots. Once you had established this, you had to copy the snapshot with encryption, selecting a specific KMS key. Therefore, for each key you would have to create a new snapshot copy and thus a new AMI.

Now the situation is much simpler. You can create instances with encrypted root volumes from any AMI, be it encrypted or not, and choose a specific KMS key at launch time. You can do this either by setting “EBS Default Encryption”, thereby encrypting all your volumes by default (hopefully the policy you want anyway), or by specifying encryption and the corresponding KMS key when launching the instance.
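
For illustration, here is a minimal AWS CLI sketch of the account-wide (per-region) variant; the key alias is a placeholder:

# turn on EBS encryption by default for the current region
aws ec2 enable-ebs-encryption-by-default
# optionally select the KMS key to use instead of the AWS-managed default key
aws ec2 modify-ebs-default-kms-key-id --kms-key-id alias/my-ebs-key
# verify the setting
aws ec2 get-ebs-encryption-by-default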

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIEncryption.html


AWS IAM Version and SID traps

In an IAM policy JSON, if you omit the version string, you are heading for trouble:

If you do not include a Version element, the value defaults to 2008-10-17, but newer features, such as policy variables, will not work with your policy. For example, variables such as ${aws:username} aren’t recognized as variables and are instead treated as literal strings in the policy.

Therefore, “you should always include a Version element and set it to 2012-10-17.” Seen in Policy Ninja 2016.

Also interesting is the Sid: it is an optional identifier, but if it is present, it MUST be unique within the policy. So you can leave it out and not worry when pasting a ton of statements around. Where it really becomes tricky is when you are using an Ansible AWS cloud module such as lambda. There the sid is used as a statement_id, which is not optional; Ansible uses it to detect create vs. update, i.e. if a statement with that sid is already present, it will just be updated instead of inserted. Typical configuration-management behavior. But now no one complains if you want to insert a new permission statement but are reusing the same statement_id…
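
To make both points concrete, here is a minimal, made-up policy with the 2012-10-17 version, a policy variable and a unique Sid per statement (the bucket and policy names are placeholders):

aws iam create-policy --policy-name home-dir-access --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListOwnPrefix",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket",
      "Condition": {"StringLike": {"s3:prefix": ["home/${aws:username}/*"]}}
    },
    {
      "Sid": "ReadWriteOwnPrefix",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-bucket/home/${aws:username}/*"
    }
  ]
}'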


Mac: Make Your External Drive Read-Only

I have a ton of music software content (samples, sounds etc.) that is impossible to carry around on my MacBook Pro’s internal drive. Therefore, I bought an external USB-C Samsung T5 mini SSD that is super fast and has a small form factor. I then moved my static sound library (Alchemy, Apple Loops, EXS samples, Native Instruments Komplete etc.) to that disk and created symbolic links to that SSD.

Now, when I am in a hotel, I just connect that SSD to get my full library on the go. Very convenient, and it doesn’t interfere with my business stuff. However, there is an issue: the drive can get disconnected at times (poor USB-C connector on the Samsung cable), and sometimes when moving around with the laptop I just want to unplug that drive quickly without that annoying message from MacOS that I disconnected the drive improperly. And they are right: OSes nowadays continuously write stuff to drives, and disconnecting without first ejecting the drive can result in corrupted files.

The solution for my use case is to make the drive read-only. Since I only use it for static files, this is just perfect. Now I can unplug the drive whenever I want, accidentally or not, and MacOS stays quiet. In addition, I cannot accidentally overwrite files or move stuff around. And if I do want to write to the disk, it is just a quick edit (change the ro below to rw).

How to do it? With the external drive inserted, run

diskutil info -all

and pick out the Type (Bundle), Volume UUID and Volume Name of your drive. In my case this is:

   Volume Name:              T5
   Volume UUID:              20B59A9D-5A8B-4B57-B4F7-03123445B
   Type (Bundle):            apfs

Use vifs to edit your /etc/fstab (the Unix way to control how file systems are mounted) and insert these two consecutive lines (per volume):

UUID=20B59A9D-5A8B-4B57-B4F7-03123445B none apfs ro
LABEL=T5 none apfs ro

The ro stands for read-only. Now eject the drive once more and re-insert it. If you run diskutil info again, you can check that the drive is no longer writable. You’re good to go.
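
A quick way to verify this is to look at the mount options (T5 is my volume name; the device name and exact flags below are just roughly what to expect, not literal output):

mount | grep T5
# /dev/disk2s1 on /Volumes/T5 (apfs, local, read-only, noowners)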

Ping me on Twitter if you need details. @gekart


Silencing your iMac: Decrease Minimum Fan Speed to 1000rpm

I finally replaced the hard drive in my 2015 iMac 5k with a Samsung 860 SSD. Not an easy decision, since you need to cut the Mac open. However, apart from the HD being way too slow, it is also very noisy: there is a hiss type of sound plus rumble from the rotational mass. Unbearable in a quiet environment.

After installing the SSD, the speed and noise improvements are tremendous. But then you hear the fan “idling” at 1200rpm, resulting in quite a strong airflow and distracting noise no matter what your system load is. It seems that on my low-spec iMac, the regular (Apple auto-controlled) rpms never get regulated above those 1200rpm, even when running Unreal 3D benchmarks. So the default 1200rpm can be seen as a static value: it doesn’t go above or below that no matter what you do to the system. I downloaded Macs Fan Control, but you can’t reduce the rpm below the minimum of 1200 (though you can go up to 2700). Luckily I found a way around that and reduced the minimum to 1000rpm, which is considerably less noisy! It makes all the difference to me.

What you need is a utility that is included in the smcFanControl tool. It’s called smc and is also available separately on GitHub. Before going any further, you must read the following first:

Warning
-------
This tool will allow you to write values to the SMC 
which could irreversibly damage your computer.  
Manipulating the fans could cause overheating and permanent damage.

USE THIS PROGRAM AT YOUR OWN RISK!

You can and will destroy stuff and I won’t take any responsibility, but here is what worked for me (assuming that you copied smcFanControl to the Applications directory):

In the Terminal app type:

/Applications/smcFanControl.app/Contents/Resources/smc -k F0Mn -w 0fa0

This writes the value for 1000rpm (0fa0) to the smc key F0Mn (minimum speed of fan 0). If you need further explanation, the GitHub repo provides a detailed readme.
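
As far as I can tell, the fan keys use the SMC’s fpe2 fixed-point format, i.e. the rpm multiplied by 4, which matches the 0fa0 above. A quick shell check, and how you would compute the value for a different minimum (1100rpm as an arbitrary example):

printf '%04x\n' $((1000 * 4))   # prints 0fa0, the value written above for 1000rpm
printf '%04x\n' $((1100 * 4))   # prints 1130, the value you would write for 1100rpm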

After that, use Macs Fan Control to reduce the speed manually to 1000rpm. Reading out the smc info now looks like this:

/Applications/smcFanControl.app/Contents/Resources/smc -f
Total fans in system: 1

Fan #0:
    Fan ID       : Main 
    Actual speed : 999
    Minimum speed: 1000
    Maximum speed: 2700
    Safe speed   : 0
    Target speed : 1000
    Mode         : forced

This results in almost complete silence in my environment. Yet, there is still considerable airflow and everything looks and feels great. What a relief!

Update 1:

Lowering the minimum speed below 1000rpm does not yield the desired results. It seems that the fan and its control logic can’t go much below that. After setting the minimum speed to 800rpm, the lowest rpm I ever got was around 960rpm.

Update 2:

After rebooting, the minimum fan speed got reset (presumably by the system) to the original 1200rpm. However, after starting Macs Fan Control, it writes my desired target speed to the smc, so that still works! Great! Here is how that looks:

/Applications/smcFanControl.app/Contents/Resources/smc -f
Total fans in system: 1

Fan #0:
    Fan ID       : Main 
    Actual speed : 959
    Minimum speed: 1200
    Maximum speed: 2700
    Safe speed   : 0
    Target speed : 800
    Mode         : forced

Changing Encryption of PersistentVolumes in Kubernetes

Docker containers are treated as ephemeral, especially when they are managed in a Kubernetes cluster. Starting and restarting containers in the cluster is done automatically and works like a breeze. Things get more complicated as soon as you decide to keep state in your cluster. This is typically done by attaching volumes to the cluster (writing to a host-mounted volume is an absolute no-no, except for experimental purposes).

On AWS you would typically use dynamic PersistentVolumes bound to EBS or EFS. Once you attach a volume, its AWS volume ID gets tracked by Kubernetes.

In the output below you can see that the volume ID is aws://eu-central-1c/vol-a559b271b5fca7dfa:

$ kubectl get pv pvc-a0f6a3ea-ab5a-a1ea-a2ba-a2809894a24a -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: aws-ebs-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs
  creationTimestamp: 2018-02-06T15:17:17Z
  labels:
    failure-domain.beta.kubernetes.io/region: eu-central-1
    failure-domain.beta.kubernetes.io/zone: eu-central-1c
  name: pvc-a0f6a3ea-ab5a-a1ea-a2ba-a2809894a24a
  resourceVersion: "125511"
  selfLink: /api/v1/persistentvolumes/pvc-a0f6a3ea-ab5a-a1ea-a2ba-a2809894a24a
  uid: a0f6a3ea-ab5a-a1ea-a2ba-a2809894a24a
spec:
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: aws://eu-central-1c/vol-a559b271b5fca7dfa
  capacity:
    storage: 20Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: mysql-pv-claim
    namespace: default
    resourceVersion: "22108"
    uid: a0f6a3ea-ab5a-a1ea-a2ba-a2809894a24a
  persistentVolumeReclaimPolicy: Delete
  storageClassName: gp2
status:
  phase: Bound

Once that volume has been created, you lose the ability to control its encryption status or to change its KMS key.

Why would you want to do such a thing in the first place? Say, for example, you created a volume unencrypted, wrote a lot of application state, and now want to encrypt the volume. Or you encrypted the volume but now decide to use a different KMS key (e.g. a customer managed key, CMK) and therefore need to change the KMS key of that volume. A simple way would be to create a second volume in Kubernetes and then copy the files over. However, when you do not have access to the applications that are using these volumes, this might not be that easy, specifically if you have hundreds of volumes.

In a pure AWS world, a process exists for this (see the CLI sketch after the list):

  • detach the volume (by stopping the instance),
  • create a snapshot,
  • copy snapshot using the correct encryption method,
  • create a volume from the snapshot with the correct parameters (specifically the AZ) and
  • attach it to the instance.
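
Roughly, the AWS CLI version of this looks as follows (all IDs, the key alias and the AZ are placeholders; check each command’s output before moving on):

# 1. snapshot the detached volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-encryption copy"
# 2. copy the snapshot with the desired encryption settings
aws ec2 copy-snapshot --source-region eu-central-1 --source-snapshot-id snap-0123456789abcdef0 \
  --encrypted --kms-key-id alias/my-ebs-key
# 3. create a new volume from the encrypted snapshot, in the same AZ as the old one
aws ec2 create-volume --snapshot-id snap-0fedcba9876543210 --availability-zone eu-central-1c --volume-type gp2
# 4. attach it to the instance (or, in the Kubernetes case below, patch the PV instead and let the node attach it)
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/xvdf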

For volumes that are attached to nodes (not masters), you can stop the instances. But before you do that, suspend the Launch process of the corresponding ASG (Auto Scaling Group). Otherwise the ASG will start a new instance and probably mount the EBS volume there.

If you terminate the instances (as you probably should in an immutable world), you need to make sure that the ASG starts the new ones in the same AZ. I had a case with 2 nodes across 3 AZs, and sure enough the round-robin ASG started the new instance in the previously unused AZ. The problem is that EBS volumes are AZ-bound, so in that case the volume couldn’t be attached to the new instance since the two were in different AZs.

However, this will change the volume ID that Kubernetes has tracked. Here our beloved kubectl patch comes in handy (use the new volume ID):

kubectl patch pv pvc-a0f6a3ea-ab5a-a1ea-a2ba-a2809894a24a -p '{"spec":{"awsElasticBlockStore":{"volumeID":"aws://eu-central-1c/vol-a559b271b5fca7dfa"}}}'

You will also need to copy all the tags to the new volume and modify the tags on the old volumes.

In summary, on Kubernetes I have successfully tried the following procedure:

  • gracefully shutdown the persistent application,
  • pick a volume to exchange and its corresponding node (pv/volumeID and no/volumesAttached, see the sketch after this list),
  • suspend Launch on node ASG,
  • terminate the instance (volume becomes available),
  • check application pods are pending,
  • create a snapshot,
  • copy snapshot using the correct encryption method,
  • create a volume from the snapshot with the correct parameters (specifically AZ and size),
  • copy all tags from original volume (and modify tags on original volume),
  • kubectl patch the PV’s volumeID,
  • resume Launch on node ASG,
  • wait for a new instance (check it is in the same AZ as the old one),
  • check pods are running and application is working fine.
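
For the second step, something like this should reveal which node a given EBS volume is attached to (the volume ID is just the one from the example above):

VOL_ID="vol-a559b271b5fca7dfa"
kubectl get nodes -o json | jq -r --arg vol "$VOL_ID" \
  '.items[] | select(.status.volumesAttached[]?.name | contains($vol)) | .metadata.name'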

Only use procedures like this in your experimental environments. I take no responsibility for any damage!


List Kubernetes Master Nodes

You can use the command below to show all nodes that are acting as masters in your cluster. This is particularly useful when dealing with kops and some versions of canal networking that (accidentally) manipulate the status of the nodes.

kubectl get nodes -o json | jq -r '.items[] | select(.spec.taints[]?.key=="node-role.kubernetes.io/master") | .metadata.labels."kubernetes.io/hostname"'

If you have a setup where masters are clearly separated from nodes, no user workloads should be scheduled on those masters (for example, because the security groups will prevent application communication between masters and nodes). In this case, you may want to check that the NoSchedule effect is applied as a taint:

kubectl get nodes -o json | jq -r '.items[] | select(.spec.taints[]?.key=="node-role.kubernetes.io/master" and .spec.taints[]?.effect=="NoSchedule") | .metadata.labels."kubernetes.io/hostname"'
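
If one of the masters turns out to be missing that taint, it can be re-applied manually; a sketch with a made-up node name:

kubectl taint nodes ip-10-0-42-13.eu-central-1.compute.internal node-role.kubernetes.io/master:NoSchedule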

Using kops and AWS Bastion Hosts Correctly

You have correctly provisioned your AWS infrastructure using the AWS Bastion Quick Start or with kops and want to connect to your private instances through the bastion hosts.

First some principles:

  • Terminate your bastion host after using it (set autoscaling to 0).
  • Do not use your bastion host as a store, especially not for anything security relevant
  • Specifically never copy any private key to the bastion host
  • Close down the security group to your specific IP
  • After using the bastion host, close down the security group completely

This may all sound paranoid, but even if some threats are theoretical at best, your security compliance group will come hunting for you if you don’t follow the steps.

To connect to any server in the private subnets, you can use ssh proxying. Add the following snippets to your ~/.ssh/config file:

Host bastion
HostName <fqdn of your bastion-elb or bastion-host>
IdentityFile ~/.ssh/bastion_rsa # your bastion private key
User ec2-user

Host private-server
User ubuntu
HostName <private IP of your server in private subnet>
ProxyCommand ssh -q -W %h:%p bastion
IdentityFile ~/.ssh/privateserver_rsa # your private server's private key
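
On OpenSSH 7.3 and newer, the ProxyCommand line can be replaced by the simpler ProxyJump directive; the same block would then look roughly like this:

Host private-server
User ubuntu
HostName <private IP of your server in private subnet>
ProxyJump bastion
IdentityFile ~/.ssh/privateserver_rsa # your private server's private key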

Try connecting to your bastion host first:

ssh bastion

Then, after succeeding, log out and try:

ssh private-server

If you follow the principles above and re-provision your bastion host (set autoscaling back to 1) every time you need it, you will run into an issue: ssh will not connect to the server, since you kept the name but the server’s identity has changed. This is a security measure that prevents an intruder from redirecting traffic to a new server while keeping the name. You can safely delete the old server’s identity from ssh by using:

ssh-keygen -R <bastion-elb>