
How to Inject Secrets from AWS, GCP, or Vault Into a Kubernetes Pod

In the world of Kubernetes, we try to automate and minimize code duplication. Consuming secrets from a secret manager should work the same way. Here’s how to do it.

This blog post builds on my previous post about automating the injection of secrets into Kubernetes Pods when using Vault.

Secrets on Kubernetes are still base64-encoded plain text.

While you can encrypt secrets at rest, they are not encrypted on the Pod itself, and the encryption key is global for etcd (where the secrets are stored). In a multi-tenant cluster this is a risk, because every tenant’s secrets are encrypted with the same global key.

Securely Inject Secrets from AWS, GCP, or Vault into a Kubernetes Pod
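For reference, encryption at rest is enabled by pointing the kube-apiserver’s --encryption-provider-config flag at a file like the following sketch; note the single, cluster-wide key shared by every Secret (the key material here is a placeholder):

# EncryptionConfiguration sketch; the key material is a placeholder.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>   # one key for the whole cluster
      - identity: {}                                 # fallback for reading unencrypted data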

GCP KMS lets you add an application-layer envelope encryption key to address this. But even then, the Pod at the end of the chain still consumes secrets in plain text.

In addition to HashiCorp Vault, AWS and GCP each offer their own secret manager.

While the AWS and GCP offerings lack many of Vault’s features, such as authentication backends, secret access auditing, revocation, and dynamic secrets, they do offer a similar way to store versioned secrets.

Challenge: Consuming Secrets on Kubernetes Pods

Consuming these secrets on a Kubernetes Pod introduces a challenge: you need to write a custom wrapper script that handles authentication and exposes the secrets as env vars. The wrapper must then replace itself with the original process (via exec) so that the process inherits all env vars and runs as PID 1, which is critical for handling termination signals properly in containers.
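For illustration, a minimal wrapper along these lines would do the job (the secret path and the vault CLI call here are assumptions for the example, not part of the webhook):

#!/bin/sh
# Fetch a secret and export it as an env var (illustrative path and field).
export DB_PASSWORD="$(vault kv get -field=password secret/myapp/db)"
# exec replaces this shell with the application, so the app inherits the
# env vars, becomes PID 1, and receives SIGTERM directly from the kubelet.
exec "$@"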

In the world of Kubernetes, we try to automate and minimize the code duplication needed to manage these kinds of supporting tasks (another example would be automating DNS and certificates). Consuming secrets from a secret manager in Kubernetes should be no different.

Some companies prefer not to manage a Vault installation, and for this reason alone, they can go with AWS’ or GCP’s secret managers.

Your Must-Know Kubernetes Webhooks

Kubernetes has a built-in admission controller that can intercept requests to the Kubernetes API server prior to the persistence of the object, but after the request is authenticated and authorized.

There are two special controllers built into the kube-apiserver binary:

  • MutatingAdmissionWebhook
  • ValidatingAdmissionWebhook

These execute the mutation and validation of requests after authentication and authorization.

Mutating controllers may modify the objects they admit; validating controllers can only admit or reject an object, not modify it.

These webhook configurations can be installed just like any other Kubernetes object via kubectl (they contain the HTTPS address of the service that implements the actual webhook logic).

Once installed, every request will go through this webhook logic.
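For example, a MutatingWebhookConfiguration registered for Pod creation looks roughly like this (the names, namespace, and CA bundle are placeholders, not the chart’s actual manifest):

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: secrets-consumer-webhook
webhooks:
  - name: pods.secrets-consumer-webhook.example.com
    clientConfig:
      service:                       # the HTTPS backend holding the webhook logic
        name: secrets-consumer-webhook
        namespace: vault-secrets-webhook
        path: /pods
      caBundle: <base64-encoded CA certificate>
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    admissionReviewVersions: ["v1"]
    sideEffects: None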

Secrets Consumer Mutation Webhook

The following code and knowledge are heavily based on the excellent work Banzai Cloud has done, and I’d like to give them all the credit (you should read their blog; it’s excellent!).

The major differences between Banzai Cloud’s webhooks and mine are:

  1. Ability to use AWS, GCP, and Vault secret managers.
  2. Removal of the Consul Template and dynamic secrets features from the Banzai Cloud webhook.
  3. Ability to authenticate to Vault not only with the Kubernetes backend but also with a GCP backend.
  4. Use explicit secrets from the secret manager or get all secrets.
  5. Use secret path as directory and fetch all secrets below it (limited to the first level).
  6. Use secret names as keys when the secret path ends with / and each secret holds a single value:
    The use case is to make secret contents easier to manage, reducing the chance of error when adding or editing a secret, as well as the number of steps required (read, append, update).
  7. Use a wildcard secret path: for example, if the secret path ends with / and a few secrets under it start with db_, the secret path can be /secret/path/db_*.
  8. Auto-detect if the secret is KV1 or KV2 (no longer need to prefix /data in your secret path).
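To make items 5 through 7 concrete, here are illustrative secret-path values:

secret/path/my-secret   # explicit: a single named secret
secret/path/            # trailing slash: all secrets one level below this path
secret/path/db_*        # wildcard: all secrets whose names start with db_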

How the Mutation Webhook works

Every new request to the API server will go through this webhook.

The webhook will check if the object is a Pod and only mutate it if it carries the webhook’s specific Pod annotations (documented in the README).

When it finds these annotations, it will modify the Pod object as follows:

  • Add a shared in-memory volume
  • Add an init container with the vault-env binary and a command to copy vault-env to that shared volume
  • Change the Pod command to vault-env <original-command> <original-args>
  • Add Vault environment variables (ROLE, CA_PATH, SECRET_PATH) for easier Vault operations
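Put together, the mutated Pod spec looks roughly like the sketch below (the image name, mount path, and values are illustrative):

spec:
  volumes:
    - name: vault-env
      emptyDir:
        medium: Memory             # the shared in-memory volume
  initContainers:
    - name: copy-vault-env
      image: vault-env:latest      # hypothetical image carrying the binary
      command: ["sh", "-c", "cp /usr/local/bin/vault-env /vault/"]
      volumeMounts:
        - name: vault-env
          mountPath: /vault
  containers:
    - name: app
      command: ["/vault/vault-env", "<original-command>", "<original-args>"]
      env:
        - name: SECRET_PATH        # one of the Vault env vars the webhook adds
          value: secret/myapp
      volumeMounts:
        - name: vault-env
          mountPath: /vault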

Important to note:
The webhook does not add the secrets as env vars onto the Pod object. No confidential data ever persists to disk, not even temporarily, or in etcd, nor can it be viewed on the modified Pod object.

All secrets are stored in memory and are visible only to the process that requested them.

Using Charts Without Explicit container.command and container.args

The webhook is now capable of determining the container’s ENTRYPOINT and CMD with the help of image metadata queried from the image registry. This data is cached until the webhook Pod is restarted. If the registry is publicly accessible (no authentication required), you don’t need to do anything; if the registry requires authentication, the credentials have to be available in the Pod’s imagePullSecrets section.

Almost every chart lets you set podAnnotations; you can find the available annotations on the webhook’s README page.

Consuming Secrets Outside of Kubernetes

The secrets-consumer-env tool can be run standalone by setting a few env vars before executing it; see the README.md for all available options.
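For instance, a run outside Kubernetes might look like this (the env var names follow the ones mentioned above, and the wrapped ./my-app command is hypothetical; the README.md is the authoritative reference):

export VAULT_ADDR=https://vault.example.com:8200
export ROLE=tester
export SECRET_PATH=secret/myapp
# secrets-consumer-env fetches the secrets, exports them as env vars,
# and replaces itself with the wrapped command.
secrets-consumer-env ./my-app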

Installing the Webhook

Before you install this chart, you must create a namespace for it. This is due to the order in which the resources in the chart are applied: Helm collects all of the resources in a given chart and its dependencies, groups them by resource type, and then installs them in a predefined order (as of Helm 2.10).

The MutatingWebhookConfiguration gets created before the backend Pod that actually serves the webhook. Kubernetes then wants to mutate that Pod as well, but the webhook is not ready to mutate anything yet (a chicken-and-egg situation).

export WEBHOOK_NS=<namespace>
WEBHOOK_NS=${WEBHOOK_NS:-vault-secrets-webhook}
kubectl create namespace "${WEBHOOK_NS}"
kubectl label ns "${WEBHOOK_NS}" name="${WEBHOOK_NS}"

Get the chart:

git clone https://github.com/innovia/secrets-consumer-webhook.git

Install the chart:

helm upgrade --namespace ${WEBHOOK_NS} --install secrets-consumer-webhook secrets-consumer-webhook --wait

NOTE: --wait is necessary because of Helm timing issues; please see this issue.

About GKE Private Clusters

When Google configures the control plane for a private cluster, it automatically sets up VPC peering between your Kubernetes cluster’s network and the control plane’s network, which lives in a separate Google-managed project.

The auto-generated rules only open ports 10250 and 443 between the masters and the nodes. This means that to use the webhook with a GKE private cluster, you must add a firewall rule that allows the master CIDR range to reach your webhook Pod on port 8443.

You can read more information on how to add firewall rules for the GKE control plane nodes in the GKE docs.
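A rule along these lines should work (a sketch; substitute your cluster’s network, the control plane CIDR, and your nodes’ target tag):

gcloud compute firewall-rules create apiserver-to-webhook \
    --network <cluster-network> \
    --source-ranges <master-ipv4-cidr> \
    --target-tags <node-target-tag> \
    --allow tcp:8443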

Explicit vs. Non-explicit (get all) Secrets

You have the option to select exactly which secrets you want to expose to your process, or to fetch all secrets under a given secret path or name.

To explicitly select secrets from the secret manager, add an env var to your Pod using the following convention:

env:
  - name: <variable name to export>
    value: vault:<vault key name from secret>
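For example, to expose a DB_PASSWORD variable backed by a secret key named db_password (both names are illustrative):

env:
  - name: DB_PASSWORD
    value: vault:db_password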

Setting up Kubernetes Backend Authentication with Vault

Vault can authenticate to Kubernetes using a Kubernetes service account.

It does this via a dedicated service account called vault-reviewer, which holds the auth-delegator permission, allowing it to send another service account’s token to the Kubernetes master for validation.

Once authentication to Kubernetes succeeds, Vault returns a client token that can be used to log in to Vault. Vault checks the mapping between the Vault role, service account, namespace, and policy to allow or deny access.

This vault-reviewer service account token will be configured inside Vault using the Vault CLI.

Let’s create the vault-reviewer service account.

Link to original gist
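In this kind of setup, vault-reviewer.yaml is typically a service account plus a ClusterRoleBinding to the built-in system:auth-delegator ClusterRole. The following is a reconstruction, so defer to the gist for the exact manifest:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-reviewer
  namespace: default        # adjust if Vault runs in another namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-reviewer-auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: vault-reviewer
    namespace: default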

Please note: if you have set up Vault in any other namespace, make sure to update this file accordingly.

kubectl apply -f vault-reviewer.yaml

Enable the Kubernetes auth backend:

# Make sure you are logged in to vault using the root token
$ vault login
Token (will be hidden):
$ vault auth enable kubernetes
Success! Enabled kubernetes auth method at: kubernetes/

Configure Vault with the vault-reviewer token and CA:

Note: if you set up Vault in any other namespace, add the -n <namespace> flag to each kubectl command.

$ VAULT_SA_TOKEN_NAME=$(kubectl get sa vault-reviewer -o jsonpath="{.secrets[*]['name']}")
$ SA_JWT_TOKEN=$(kubectl get secret "$VAULT_SA_TOKEN_NAME" -o jsonpath="{.data.token}" | base64 --decode; echo)
$ SA_CA_CRT=$(kubectl get secret "$VAULT_SA_TOKEN_NAME" -o jsonpath="{.data['ca\.crt']}" | base64 --decode; echo)
$ vault write auth/kubernetes/config \
    token_reviewer_jwt="$SA_JWT_TOKEN" \
    kubernetes_host=https://kubernetes.default \
    kubernetes_ca_cert="$SA_CA_CRT"
Success! Data written to: auth/kubernetes/config

An example of mapping a Vault role to a service account and namespace:

vault write auth/kubernetes/role/tester \
    bound_service_account_names=tester \
    bound_service_account_namespaces=default \
    policies=test_policy \
    ttl=1h

Setting Up GCP Backend Authentication with Vault

Enable the Google Cloud auth method:

$ vault auth enable gcp

Configure the auth method credentials:

$ vault write auth/gcp/config \
    credentials=@/path/to/credentials.json

If you are using instance credentials or want to specify credentials via an environment variable, you can skip this step.

Create a named role:

For an iam-type role:

$ vault write auth/gcp/role/my-iam-role \
    type="iam" \
    policies="dev,prod" \
    bound_service_accounts="<service-account>@<project>.iam.gserviceaccount.com"

For a gce-type role:

$ vault write auth/gcp/role/my-gce-role \
    type="gce" \
    policies="dev,prod" \
    bound_projects="my-project1,my-project2" \
    bound_zones="us-east1-b" \
    bound_labels="foo:bar,zip:zap" \
    bound_service_accounts="<service-account>@<project>.iam.gserviceaccount.com"

Required GCP Permissions

Vault Server Permissions

For iam-type Vault roles, the Vault server can be given the following role:

roles/iam.serviceAccountKeyAdmin

For gce-type Vault roles, the Vault server can be given the following role:

roles/compute.viewer

If you instead wish to create a custom role with only the exact GCP permissions required, use the following list of permissions:

iam.serviceAccounts.get
iam.serviceAccountKeys.get
compute.instances.get
compute.instanceGroups.list

Permissions For Authenticating Against Vault

Note that the previously mentioned permissions are given to the Vault servers. The IAM service account or GCE instance that is authenticating against Vault must have the following role:

roles/iam.serviceAccountTokenCreator
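For example, you could grant it with gcloud (the project ID and service account email are placeholders):

# Grant the token-creator role to the service account that will
# authenticate against Vault.
gcloud projects add-iam-policy-binding <project-id> \
    --member "serviceAccount:<service-account-email>" \
    --role "roles/iam.serviceAccountTokenCreator"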

Feel free to check out the source code:

https://github.com/doitintl/secrets-consumer-webhook.git
