Lost Web Passwords After Migrating to New Mac

After migrating to a new Mac, I found that a ton of my website passwords were gone or out of date (I haven't been using iCloud Keychain or any other password app). Migration Assistant seems to have problems under certain conditions. While I haven't completely figured out the underlying issue, the only viable solutions were to either copy passwords to a new keychain or to use iCloud passwords, temporarily or permanently.

On newer versions of macOS, password storage centers on iCloud, complemented by a local cache and local system passwords. You can find the password cache in keychain files and folders under ~/Library/Keychains. If you deactivate iCloud passwords, the cache is referred to as “Local Items” (if you decide to keep it). Every Mac gets its own UUID-named folder, so if the migration succeeded you will find your old Mac’s folder alongside a new one for your new Mac. Yet even with the files present on the drive, the passwords were still missing; copying those files around won’t help.
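For illustration, a small script can list those per-Mac folders. This is a sketch under the assumption that the folder names are plain UUIDs, as described above; the helper name is my own:

```python
import re
from pathlib import Path

# Matches a standard 8-4-4-4-12 hex UUID, the naming scheme of the
# per-Mac keychain cache folders described above.
UUID_RE = re.compile(
    r"^[0-9A-Fa-f]{8}-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{4}"
    r"-[0-9A-Fa-f]{4}-[0-9A-Fa-f]{12}$"
)

def uuid_keychain_folders(keychains_dir=Path.home() / "Library" / "Keychains"):
    """Return the UUID-named folders: one per Mac that owned this home directory."""
    base = Path(keychains_dir)
    if not base.is_dir():
        return []
    return sorted(p.name for p in base.iterdir()
                  if p.is_dir() and UUID_RE.match(p.name))
```

After a migration you would expect two entries here, one for the old Mac and one for the new one; as noted above, their mere presence does not mean the passwords are usable.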

After investing a bit of time, I decided to activate iCloud passwords on my old Mac and then on my new Mac. The files are synced online and then to the new Mac (sure enough, during the experimentation phase this syncing deleted at least one of my newest passwords…). Since this did the trick, I am now even considering keeping iCloud passwords active, which would simplify my future migrations.

If you don’t want to store passwords on the iCloud servers permanently, just deactivate iCloud passwords on ALL devices (not that easy if you use a ton ;). Apple actually suggests there is a way to skip iCloud password storage entirely by not creating an iCloud Security Code. I haven’t tried this and am not even sure it still works in Sierra.

Posted in Operating systems | Tagged , , , | Comments Off on Lost Web Passwords After Migrating to New Mac

Pricing Strategies for a Fairer Trade

Today I opened a milk carton printed with a thank-you note: by buying this brand of milk, we are supporting hill dairy farmers. Germany is home to some of the toughest discounters (and other retailers), who, acting in the interest of the consumer, use their economies of scale to dictate the purchasing cost of food and other goods. This process has driven retail prices in Germany to the lowest in the EU. However, it also jeopardizes farmers and other producers.

What if there were a better way to set prices?

First I would like to look at the reasons why consumers push for such low prices, and then propose an alternative pricing model. While retailers act in the interest of consumers by lowering the cost of goods, they also want to maximize their own return, which adds a layer of complexity to pricing. Competing retailers advertise their prices and consumers can compare the offers. Commodities such as milk are undifferentiated, so the comparison is easy. Rational consumers choose the cheaper offer, since they need to maximize the utility of their income.

However, I think there is a second interest consumers pursue, or at least I do. Since there is a considerable retail margin on top of the goods, it is this margin we are trying to optimize away. Retailers have long since lost their personal touch and their expertise, yet still try to earn the same margin. For example, I know more about the BMW or the camera I am pursuing than my dealers do. All the information is available on the internet, and as a buyer I am more interested in the product than the dealer is. With all that price transparency, would you buy from a more expensive retailer if there were zero benefit in it?

In addition, I think consumers want to support the producers of goods; some of my colleagues also want to support retailers (much more so than I do).

New pricing model

Discounters, which often stock only one variety of a commodity, should offer two prices for that good: one calculated as hard as it is today, and one a few cents higher. That additional margin would be passed on transparently and in full directly to the producer of the good, without ever touching the retailer’s profit and loss account. Technology should make this easy to implement.
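In code, the split is trivial, which is part of its appeal. A minimal sketch with invented cent amounts:

```python
# Invented example figures for one litre of milk, in EUR.
BASE_PRICE = 0.68      # the hard-calculated discounter price, as today
PRODUCER_TOPUP = 0.10  # the optional extra, passed through 100% to the producer

def checkout(litres, pay_fair_price):
    """Return (retailer_revenue, producer_payout) for a milk purchase.

    The top-up never enters the retailer's revenue: it is accounted for
    separately and paid out directly to the producer.
    """
    retailer_revenue = round(litres * BASE_PRICE, 2)
    producer_payout = round(litres * PRODUCER_TOPUP, 2) if pay_fair_price else 0.0
    return retailer_revenue, producer_payout
```

The point of the sketch is the accounting boundary: whichever price the consumer picks, the retailer's revenue is identical.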

One could extend this strategy to other products and retailers, too.

A consumer can then decide whether they want to, or can, support the producer directly without in any way supporting the retailer (other than by purchasing at the store). This would increase fairness towards producers. With two prices for a single product, consumers themselves decide (not through the proxy of the discounter) whether the prices are too high. It would still be good for the retailer making the offer: initially as a differentiator from the competition, and later still providing that solid margin on the base good.

It is also fair to consumers, since not all of them are equal. Wealthy consumers could easily afford the higher price, while poorer ones could pay the lower price, knowing that the pricing model supports the producers. Either could decide to do the opposite if they so wish. This model would signal the consumer’s interest directly to the producer, without being proxied through the retailer.

Why not just buy the milk brand described in the introduction? Because it is unclear how fair that is: who gets the margin? Is it fair to the consumer? Where can I even get that brand? Transparency, fairness and availability are all reasons to prefer the proposed strategy over that model.

Posted in Uncategorized | Comments Off on Pricing Strategies for a Fairer Trade

List of Application Dependencies

Docker and containers in general are revolutionising the way we think about application development and operations. As a packaging technology, Docker in my view perfectly defines the interface between an application and the host it runs on. To better understand why, I have looked at how this interface captures application dependencies and what it leaves out, because many different packaging technologies have come before it. Here is a historic list of those I have used myself:

  • C64 BASIC source code (actually it is byte code but can be seen as equivalent)
  • C64 6502 machine code
  • Atari ST executable plus RSC file
  • Unix shell source code
  • Unix executable plus libs plus conf
  • NextStep application package in a tar
  • Windows installer package with executables, registry, libs, files, services (a nightmare)
  • Linux RPMs and other packaging formats
  • Java JAR, EAR, WAR
  • Virtual machine
  • Docker containers
  • Unikernels

Looking specifically at the dependencies that applications in these packaging formats need to consider, I came up with this list:

  • Hardware
  • Hypervisor
  • Operating system
  • Runtime environment (interpreter, middleware, …)
  • Libraries
  • Filesystem structure, size and specific files
  • Registry and other application configuration
  • System services
  • Networking
  • Other application components
  • Other enterprise services
  • Internet / external services
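
To make that interface concrete, here is a minimal, hypothetical Dockerfile; the base image, file names, port and variable are invented for illustration. Note how many items from the list above it pins down explicitly:

```dockerfile
# Operating system and base libraries
FROM debian:bookworm-slim

# Runtime environment (interpreter)
RUN apt-get update && apt-get install -y --no-install-recommends python3 \
    && rm -rf /var/lib/apt/lists/*

# Filesystem structure and specific application files
WORKDIR /app
COPY app.py /app/

# Application configuration, passed via the environment
ENV APP_GREETING="hello"

# Networking: the port the application listens on
EXPOSE 8080

# The service aspect: what to run
CMD ["python3", "app.py"]
```

What the image cannot capture is just as telling: hardware, the hypervisor, other application components, enterprise services and external internet services stay outside and are left to the orchestration around the container.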
Posted in DevOps | Tagged | Comments Off on List of Application Dependencies

How to Get Proficient in New Technologies

Breakthrough innovations are changing IT at ever faster speeds. New tools, languages, libraries, virtualisation technologies, workflows appear and get adopted in shorter times. A perfect example of this is Docker.

Keeping pace in that kind of environment requires a structured methodology. Here is what I do when preparing for a new thing, keeping in mind that no matter how cool it is, I will probably be far from using it in my current paid projects.

  • Read a lot on sites such as InfoQ, where practitioners report.
  • Go to general DevOps conferences such as QCon or Velocity, or watch the talks on YouTube.
  • Key people will start to emerge from what you read and hear. Follow their publications and talks.
  • Pick a topic that matches your preferences and that will keep you focused over the next months and hopefully years.
  • Try it out, start with a small startup scenario, prototype it. Ask for help.
  • Drop it if it is shitty.
  • Energy and passion will be key to your success.
  • Tweet and blog about key observations.
  • Use gists for your code snippets and embed them in your blog.
  • Create your own cheatsheet.
  • Organise your thoughts by keeping a list of findings, questions, conflicts, quotes, competition, etc.
  • Eat your own dog food by using the new thing for your own mini projects.
  • Build yourself a lab environment. With vagrant and docker this is nowadays easier than ever.
  • If it’s open source, understand the code and the process. If you have the time and energy, get involved.
  • If it’s closed source, scan the ecosystem for open alternatives.
  • Make one conceptual slide per day.
  • Prepare a slide deck for your colleagues at work. Start with Why, show just enough concept then do a demo. Proceed with more breadth.
  • Try to earn some money by giving training on the topic.
  • Speak at conferences, people like practical advice and distrust marketing.
  • Connect with likeminded people. Give more than you take.
  • Make the world a better place.
Posted in Education | Comments Off on How to Get Proficient in New Technologies

Ops as a competitive differentiator?

Operational aspects of software systems are often treated as second-class citizens. The point is that, for example, availability is expected to be a given: FRs (functional requirements) are a differentiator and NFRs (non-functional requirements) are not. So why should anyone focus on NFRs?

In a world of mobile and the sharing economy, NFRs are becoming a necessity for the consumer (enterprise IT people need to understand the difference between an internal customer and a consumer). However, due to complexity, internal structures and ignorance, some of the enterprises creating those new products won’t get Ops right.

Consumers will find themselves in a world of imperfection. A focus on Ops will lead to differentiation and therefore competitive advantage. Or at least: ineffective Ops may devastate the unprepared!

Posted in DevOps | Tagged , , | Comments Off on Ops as a competitive differentiator?


Reutax Insolvency

The insolvency of Reutax AG on 22 March 2013 startled the industry, and freelancers in particular. Hard times lie ahead for freelance project workers who were placed through Reutax AG. With a payment term of 30 days, it can easily happen that three monthly invoices (January, February, March) go unpaid. Anyone who has been through insolvency proceedings knows how difficult, protracted and ineffective they are. Often less than 10% is left for suppliers, and freelance project workers are suppliers. It remains to be seen how the situation will be resolved in the case of Reutax AG. I am keeping my fingers crossed.

Beyond the many individual fates caused by the Reutax insolvency, the case in my opinion raises general questions about how contracts and payments are handled. The staffing agencies’ business model has become the standard model for most companies; direct contracts have become rare. This means that the majority of freelancers have to be contracted through an agency, whether they want to or not.

Look before you leap, as the proverb goes. But what means of scrutiny does a freelancer have? You choose your target company by the criteria that matter: VW, Daimler, BMW for automotive, and so on. Well-known companies that match your profile and are too big to fail. What does that look like for a staffing agency? What information do you get about them? I will leave that question open.

What could a solution look like? In the standard model of project work, the placement agency negotiates one hourly rate with the client and another with the contractor; the difference is the agency’s share. All cash flows are carried as short-term items among the assets and liabilities on the agency’s books and are therefore part of the insolvency estate.

One option would be for the client to pay the share to the agency and the hourly rate to the freelancer directly. At least two reasons speak against this: the client would gain transparency into the share, and it would have to maintain several accounts, avoiding which is precisely one advantage of settling through an intermediary.

The second option would be for the client to pay the full amount to the freelancer, who then forwards the share to the agency. For the reasons above, this also seems far-fetched.

The third and, in my view, best option would be to set up ring-fencing, as is common with equity funds, for example. The client pays the amount into a special account that is not carried on the agency’s books and is therefore untouched by an insolvency. From this account, the agency is paid its share and the freelancer their portion. This would increase complexity for the staffing agency, but then again, that is what the agency earns its share for.
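The payout from the ring-fenced account is simple arithmetic. A minimal sketch, with invented hourly rates:

```python
# Invented example rates, in EUR per hour.
CLIENT_RATE = 95.0      # what the client pays into the ring-fenced account
FREELANCER_RATE = 80.0  # what was agreed with the freelancer

def escrow_payout(hours):
    """Split the escrowed amount into the freelancer's and the agency's payouts.

    The money never sits on the agency's books: the escrow account pays
    the freelancer their rate and the agency the difference (its share).
    """
    gross = hours * CLIENT_RATE
    freelancer = hours * FREELANCER_RATE
    agency_share = gross - freelancer
    return freelancer, agency_share
```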

Another workaround: choose a staffing agency that is listed on a stock exchange and do your own hedging by buying put options.

Posted in Strategy | Tagged | Comments Off on Reutax-Insolvenz

QCon 2013 keynote by Barbara Liskov

Barbara Liskov, professor at MIT, presented an IT-history keynote at QCon called “The Power of Abstraction”. It was both fun and painful to be reminded of the IT topics of the 1970s:

  • Gotos
  • Top-down structural design
  • Modules
  • Abstract data types
  • Algol, Simula, CLU

Specifically, her work on CLU, a programming language used mainly for research, with its concepts of inheritance, polymorphism, iterators, multiple return values, explicit type casting and exception handling, has influenced the development of OOP and of popular languages such as Java and Python, among others. Graham Lee has collected all the documents referenced in the talk in his post.

Barbara Liskov is the second woman to receive the Turing Award (2008).


Above is the content map of the presentation, created by Heather Willems in parallel with the keynote.

Posted in QCon 2013 | Tagged , , , | Comments Off on QCon 2013 keynote by Barbara Liskov

Continuous delivery

ThoughtWorks’s Vladimir Sneblic, together with a colleague, today held an excellent Continuous Delivery course at QCon 2013. Expanding on the well-known must-read by Jez Humble, the tutorial included anecdotal stories, case examples and professional materials.

In a nutshell, Continuous Delivery proposes to improve the painful (at least in most larger companies I know of) process of delivering software from development to operations by increasing delivery and deployment frequency. There are multiple reasons to do this. Massively increasing delivery frequency:

  • decreases the increment size, reducing the risk resulting from the scope of the change,
  • forces to focus on the delivery process and drives automation efforts,
  • uncovers the necessity to cooperate especially between Dev and Ops,
  • shows the urgency to improve the structural deficiency in Ops,
  • and ultimately reminds of the importance of Ops in the context of software development.

However, implementing Continuous Delivery is a major change effort. The following is a list of things to implement, supposing that an agile software development is already in place:

  • Continuous integration
  • Trunk is always production-ready
  • Automated testing
  • DB migration tools
  • Agile infrastructure
  • Comprehensive configuration management (everything is in version control)
  • DevOps
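
To make one of these bullets concrete: at its core, a DB migration tool applies versioned schema changes exactly once and records what it has applied. A minimal sketch using sqlite3, with made-up migrations (real tools load these from files kept in version control alongside the application):

```python
import sqlite3

# Made-up, ordered migrations; each version is applied at most once.
MIGRATIONS = [
    (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    (2, "ALTER TABLE customer ADD COLUMN email TEXT"),
]

def migrate(conn):
    """Apply pending migrations in order, recording each applied version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, statement in MIGRATIONS:
        if version > current:
            conn.execute(statement)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()
```

Because the applied versions are recorded, running the tool is idempotent: the same pipeline step can run against every environment, which is exactly what frequent delivery requires.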

In larger corporations this results in a major organizational transformation to be pitched at the level of the corporate CIO.

Posted in Continuous delivery | Comments Off on Continuous delivery

The Three Ages

Today Dan North presented his “Three Ages” core pattern (or, as I would say, business model) at QCon 2013. I like how succinctly it categorizes the phases of, say, the adoption of a methodology in an organization.

1. Explore (maximize discovery)
2. Stabilize (minimize variance)
3. Commoditize (maximize efficiency)

The class applied the model and some interesting statements ensued:

Business is often in explore mode; however, it requires IT to be in stabilize.
Dan recounted the story of an IT ops guy who constantly drives for commoditization. I would love to have this guy on my team!

In a project I know of, the teams are trying to reach stabilize. The business, however, tries to force the maximization of IT efficiency, which destabilizes the teams back into exploration.

There is no shortcut through the three ages: follow the steps in the given order and create a culture of continuous improvement.

Posted in QCon 2013 | Comments Off on The Three Ages

Review: NoSQL Distilled

NoSQL is on everyone’s lips, so as a long-time SQL user you wonder what the hype around these databases is about. Given the success of the various NoSQL databases, the trend will probably outlast the hype. While NoSQL sets itself apart from the relational data model in its very name, a negative definition is not enough for real understanding. The book NoSQL Distilled promises a remedy.

It covers key-value, document, column and graph databases in a product-independent way. Consistency, persistence, scaling, distribution models, map-reduce and schema migration are, in my view, explained very well and accessibly. Developer productivity and the demands of 24×7 operations and big data are always front and center. At only about 150 pages, the book is a blessing in a world of information overload. The rather complex and often theoretically presented field of distributed systems is laid out factually, precisely and free of academic excess, making it ideally suited to the practitioner or professional.

While Martin Fowler needs no introduction, I would like to highlight Pramod Sadalage as co-author of the book “Refactoring Databases” (if anyone tries to tell you that schema changes are impossible during live operation, I warmly recommend that read).

The book is a clear buy, especially if you, say as an IT manager, want to get a quick overview of this interesting field. If you have already worked with one of the NoSQL products, you can put your knowledge into context. For programmers of specific NoSQL dialects the book will certainly offer too little depth, and theoretically, given its purely practical approach, it is more of a warm-up. But it is exactly what the title promises: a distillation of the NoSQL paradigm.

Posted in Books | Tagged , | Comments Off on Review: NoSQL Distilled