The kubelet exposes many useful metrics that can be used for a variety of purposes. These metrics are already being scraped by components like the Metrics Server.
The metrics from the /stats/summary endpoint include CPU, memory, rootfs and log metrics for every container running on the node, which makes them very helpful for tracking the health of each node.
However, since these metrics can also contain sensitive information about the running containers, access to them has rightly been locked down, and the read-only port 10255 has been deprecated.
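For reference, the summary payload is a JSON document shaped roughly like this. The field names follow the kubelet Summary API; the values and node/pod names below are made up for illustration:

```json
{
  "node": {
    "nodeName": "gke-node-1",
    "cpu":    { "usageNanoCores": 120000000 },
    "memory": { "workingSetBytes": 524288000 },
    "fs":     { "usedBytes": 3221225472 }
  },
  "pods": [
    {
      "podRef": { "name": "nginx", "namespace": "default" },
      "containers": [
        {
          "name": "nginx",
          "cpu":    { "usageNanoCores": 1000000 },
          "memory": { "workingSetBytes": 10485760 },
          "rootfs": { "usedBytes": 28672 },
          "logs":   { "usedBytes": 4096 }
        }
      ]
    }
  ]
}
```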
For this post, we are going to be working with a single node v1.11 GKE cluster for which we have cluster-admin access.
The first thing to do before we go down this path is to take a look at the official documentation for Kubelet Authentication and Authorization. We want to make sure that the kubelet in our cluster is configured correctly in the first place.
Below is a snippet from /home/kubernetes/kubelet-config.yaml on a GKE node:

```yaml
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: false
  x509:
    clientCAFile: /etc/srv/kubernetes/pki/ca-certificates.crt
authorization:
  mode: Webhook
readOnlyPort: 10255
```
The above configuration provides:

- anonymous authentication disabled
- webhook (bearer token) authentication disabled
- x509 client certificate authentication against the cluster CA
- authorization delegated to the API server via Webhook mode
- the read-only port left open on 10255
For the purpose of this demonstration, we actually want webhook authentication to be enabled. Follow these steps to enable it.
Get the node name:

```shell
kubectl get nodes
```

SSH onto the GKE node and become root:

```shell
gcloud compute ssh <NODE_NAME> --zone <ZONE>
sudo -i
```

Enable webhook authentication in /home/kubernetes/kubelet-config.yaml:

```yaml
authentication:
  ...
  webhook:
    enabled: true
  ...
```

Restart the kubelet:

```shell
systemctl restart kubelet
```
As of the time of writing this post, the kubelet still exposes these metrics via the read-only port 10255. This can be conveniently accessed via curl if you are on the node, or via the following pod config.
Apply the following pod configuration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 10255
```
Exec onto the pod and install utils:

```shell
kubectl exec -it nginx /bin/bash
apt update && apt install curl -y
curl http://localhost:10255/stats/summary
```
In order to get the kubelet metrics from within the cluster you may use the service account bearer token to authenticate requests with the kubelet. This is possible because we’ve enabled webhook authentication.
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet-api
rules:
- apiGroups: [""]
  resources: ["nodes/stats", "nodes/metrics", "nodes/log", "nodes/spec", "nodes/proxy"]
  verbs: ["get", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-kubelet-api
subjects:
- kind: ServiceAccount
  name: nginx
  namespace: default
roleRef:
  kind: ClusterRole
  name: kubelet-api
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  hostNetwork: true
  serviceAccountName: nginx
  containers:
  - name: nginx
    image: nginx
```
Once the configuration above has been applied, exec onto the pod and run the following. Note that the bearer token is presented to the secure port 10250 over HTTPS; the read-only port 10255 serves plain HTTP and does not perform authentication:

```shell
kubectl exec -it nginx /bin/bash
apt update && apt install curl -y
curl https://localhost:10250/stats/summary \
  -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -k
```
Y’all should be able to get the same JSON payload as from the read-only port.
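Once you have the payload, a few lines of Python make it digestible. The sample document below is made up, but its field names follow the shape of the real /stats/summary response; in practice you would save the curl output to the file instead:

```shell
# Write an abbreviated sample of the summary payload (made-up values).
cat > sample-summary.json <<'JSON'
{
  "pods": [
    {
      "podRef": { "name": "nginx", "namespace": "default" },
      "containers": [
        { "name": "nginx", "memory": { "workingSetBytes": 10485760 } }
      ]
    }
  ]
}
JSON

# Print memory working set per container, namespaced by pod.
python3 - <<'PY'
import json

with open("sample-summary.json") as f:
    summary = json.load(f)

for pod in summary.get("pods", []):
    for container in pod.get("containers", []):
        mem = container.get("memory", {}).get("workingSetBytes", 0)
        print(f'{pod["podRef"]["namespace"]}/{pod["podRef"]["name"]} {container["name"]} {mem}')
PY
# -> default/nginx nginx 10485760
```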
In order to get access to the kubelet metrics from outside the cluster, you may want to use the client certificate authentication method.
From the kubelet-config.yaml above, we can see that the kubelet is configured with a client CA file:

```yaml
x509:
  clientCAFile: /etc/srv/kubernetes/pki/ca-certificates.crt
```
We can simply use the certs in this directory to access the kubelet metrics:

```shell
cd /etc/srv/kubernetes/pki/
curl https://localhost:10250/stats/summary \
  --cacert ca-certificates.crt --cert kubelet.crt --key kubelet.key -k
```
But this is not terribly useful now, is it? So next, let's see how we can use client certificate authentication to access the kubelet while in the cluster.
As per the docs, any request presenting this client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate.
From the Kubernetes Authentication Docs we can see how x509 certificates map to users and groups.
CN –> User
O –> Group
So let’s find out the common name of the certificate on the GKE worker node.
```shell
openssl x509 -in /etc/srv/kubernetes/pki/kubelet.crt -text -noout
# Output
# ...
# Subject: CN=kubelet
# ...
```
This corresponds to:
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet-api
subjects:
- kind: User
  name: kubelet
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kubelet-api
  apiGroup: rbac.authorization.k8s.io
```
If the client cert had a Subject with O=system:nodes for example, then the ClusterRoleBinding would be:

```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubelet-api
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: kubelet-api
  apiGroup: rbac.authorization.k8s.io
```
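To see the CN/O mapping in action without touching the node's real certs, here is a small sketch: generate a throwaway self-signed cert whose subject carries both an O and a CN, then read the subject back the same way we did above. The names demo-client and the self-signed shortcut are purely illustrative; in a real cluster the cert would need to be signed by the CA behind clientCAFile before the kubelet would accept it.

```shell
# Hypothetical names for illustration only: CN=demo-client maps to the user,
# O=system:nodes maps to the group.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo-client.key -out demo-client.crt \
  -subj "/O=system:nodes/CN=demo-client"

# The subject shows the group (O) and user (CN) Kubernetes would see.
openssl x509 -in demo-client.crt -noout -subject
```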