Per the Kubernetes docs, enabling PSPs before you're sure it'll work can be bad news - you can potentially take down workloads by accidentally not authorizing the appropriate ServiceAccounts to use the policies you create. This won't actively take down running Pods, but it will make new Pods of the same spec impossible to schedule (for example, after upgrading a Deployment), leading to errors in your Event logs like:
```
Error creating: pods "foo-f89759699-" is forbidden: unable to validate against any pod security policy: []
```
With that said, it's extremely important to test, test, and test again if you're enabling PSPs on a cluster already running Pods, especially if it's prod! You'll also need to work with all your development teams to figure out exactly what permissions their containers already use and how to create PSPs that satisfy both them and your organization's security requirements.
...with that out of the way, let's do this!
- SSH into the master control plane node
- Review the current ServiceAccounts in `kube-system` - these are CRITICAL to get right before we set up the API to use the `PodSecurityPolicy` Admission Controller. Running `kubectl get sa -n kube-system` gets us quite a list... a more helpful command is `kubectl get po -n kube-system -o custom-columns=NAME:.metadata.name,SA:.spec.serviceAccount`, which shows the accounts actively running system Pods.
- Review and apply the privileged policy to the cluster (a sketch of what this might look like follows below)
  - Note the use of individual ServiceAccounts by `.metadata.name` in the bindings... we shouldn't authorize all ServiceAccounts, because that would authorize the SAs used by our controllers (e.g. the Deployment controller), which effectively means anyone capable of creating a Deployment could create a privileged Pod - not good! Generally, the only Pods we want to run with permissions like this are administrative ones: Pods that actively manipulate their hosts (like `kube-proxy`, which requires the host network in order to modify `iptables`). If you have a customer that requires permissions like this, I'd recommend at a bare minimum giving them their own tainted Node to run their Pods on - that is, if you can't convince them they're wrong and their boss signs off on the security risk :)
  - We prefix the PSP name with `zz` so it is resolved appropriately relative to other PSPs. Generally, you want your highest-privileged policies to sort last alphabetically.
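To make this concrete, here's a rough sketch of what a privileged PSP and its bindings could look like. The policy name, the ClusterRole/RoleBinding names, and the bound ServiceAccount (`kube-proxy`) are illustrative assumptions, not necessarily the exact manifests from the repo:

```yaml
# Hypothetical "allow everything" policy, prefixed with zz so it sorts last
# alphabetically and is only matched when no stricter policy applies.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: zz-privileged
spec:
  privileged: true
  allowPrivilegeEscalation: true
  allowedCapabilities: ["*"]
  volumes: ["*"]
  hostNetwork: true
  hostPID: true
  hostIPC: true
  hostPorts:
  - min: 0
    max: 65535
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
---
# ClusterRole granting "use" on just this policy.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-zz-privileged
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["zz-privileged"]
  verbs: ["use"]
---
# Bind only the specific system ServiceAccounts that truly need it -
# never a broad group like system:serviceaccounts.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-zz-privileged
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-zz-privileged
subjects:
- kind: ServiceAccount
  name: kube-proxy
  namespace: kube-system
```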
- Review and apply the restricted policy (sketched below)
  - Since this policy is very restrictive, I see no issue in allowing all SAs access to it as a secure "deny-by-default". If someone/something (a controller) has access to create Pods, I feel comfortable with them creating Pods as long as those Pods adhere to this policy.
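As a rough sketch (again, the names and exact constraints here are assumptions rather than the repo's actual manifest), a restricted policy plus a cluster-wide binding might look like:

```yaml
# Hypothetical deny-by-default policy: no root, no privilege escalation,
# no host namespaces, only basic volume types.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities: ["ALL"]
  volumes: ["configMap", "secret", "emptyDir", "projected", "downwardAPI", "persistentVolumeClaim"]
  hostNetwork: false
  hostPID: false
  hostIPC: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges: [{min: 1, max: 65535}]
  fsGroup:
    rule: MustRunAs
    ranges: [{min: 1, max: 65535}]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-restricted
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted"]
  verbs: ["use"]
---
# Grant every ServiceAccount in the cluster use of the restricted policy.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-restricted
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-restricted
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:serviceaccounts
```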
- Run `kubectl get po -A -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,SA:.spec.serviceAccount` to list the ServiceAccount in use for every Pod in the cluster. We need to validate that all of these SAs have access to `use` the PSPs we created.
- For each PSP, NAMESPACE, and SA, run `kubectl auth can-i use podsecuritypolicy/PSP --as=system:serviceaccount:NAMESPACE:SA`. Do they return `yes` and `no` as you'd expect? To be clear - this only tells us whether the SA has access to the policy, not whether Pods will pass the policy's requirements.
- To ensure the API can enforce our PSPs during the admission phase of a request, we need to enable a new Admission Controller on our API instance.
- On the master node, edit and save `/etc/kubernetes/manifests/kube-apiserver.yaml`, the static Pod definition responsible for running the API instance as a container managed by the Kubelet. You'll need to edit the `--enable-admission-plugins` flag; it'll end up looking something like:

```yaml
[...]
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
[...]
```
- Review and apply the nginx Deployment to the `default` Namespace (a sketch of such a manifest follows below). This shouldn't successfully schedule, because it doesn't meet the requirements of the `restricted` PSP (the only one usable by `system:serviceaccount:default:default`) - namely, the container runs as UID 0. We can validate this by checking out the Events: `kubectl get ev`.
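For reference, a minimal Deployment along these lines (an illustrative sketch, not necessarily the repo's exact manifest) would be rejected, since the stock nginx image starts as root:

```yaml
# Hypothetical nginx Deployment; the stock nginx image runs as UID 0,
# so its Pods fail validation against the restricted PSP.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19
        ports:
        - containerPort: 80
```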
- Review and apply the apache Deployment to the `default` Namespace (sketched below). This should successfully schedule, since the bitnami image used is configured to run as UID 1001. We can also validate this in the Events: `kubectl get ev`.
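And a sketch of the apache counterpart (again illustrative - the image tag and names are assumptions), which passes because the bitnami image runs as a non-root user:

```yaml
# Hypothetical apache Deployment; bitnami/apache is built to run as UID 1001,
# so it satisfies the restricted policy's non-root requirement.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: bitnami/apache:2.4
        ports:
        - containerPort: 8080
```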
When you're done, tear down the cluster:

```sh
cd <TOP>/clusters/ubuntu-1804/simple
./down.sh
```
PodSecurityPolicies are a great way to prevent users of your cluster from doing malicious things, whether intentionally or unintentionally, and to enforce security standards for everyone with a few manifests (e.g. requiring a particular seccomp profile for every container). However, you must use caution when enabling this functionality: set RBAC permissions correctly, and ensure your development teams have the policies they need to run their workloads. The former is solved by automation and auditing, the latter by communication. New ServiceAccounts must have the ability to `use` at least one PSP. Additionally, you should consider continually auditing which ServiceAccounts have which PSPs available, and who has access to those ServiceAccounts (and, perhaps often missed, their Secrets!). Finally, strong communication with your developers is a must-have with PSPs; if you don't let people know how they work or what options are available, you'll leave developers irritated and confused over why their Pods won't schedule. All of this effort ultimately leads to a more secure cluster. Take the time to do it right! :)