Under some circumstances (kops versions before 1.8.0 combined with canal networking), the master nodes can lose their NoSchedule taint. If this happens, workload pods will get scheduled onto your masters. In that case the following command might help you re-apply the taint:
kubectl taint nodes master1.compute.internal node-role.kubernetes.io/master=:NoSchedule
If kubectl rejects the taint with an error about invalid characters in the string, you can set it via a patch instead:
kubectl patch node master1.compute.internal -p '{"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}]}}'
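To double-check that the taint is actually in place afterwards (the node name here is just an example), you can inspect the node:

kubectl describe node master1.compute.internal | grep Taints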
Once the taint is set correctly with one of the commands above, you can delete the workload pods running on the masters; they will be rescheduled onto the regular nodes.
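A quick way to find the workload pods that ended up on a master is to filter by node name (again, the node name is just an example):

kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=master1.compute.internal

Anything in that list that is not a system component can then be removed with kubectl delete pod, and the scheduler will place the replacement on a regular node.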
BTW, if you do not take the measures above, not only will pods get scheduled on your masters, which can become problematic in itself, but any differences between your masters and nodes will also catch you cold. E.g. you might open a security group on your nodes but not on your masters; if a pod gets scheduled onto a master, it will not be able to communicate as desired.
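So if a pod behaves strangely, it is worth checking first whether it landed on a master at all (pod name and namespace below are placeholders):

kubectl get pod my-app-pod -n default -o wide

The NODE column tells you immediately whether you are hitting this problem.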