What happened:
The following:
> kubectl get --raw "/api/v1/nodes/k8s-dev-worker/proxy/metrics/resource"
# HELP container_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the container in core-seconds
# TYPE container_cpu_usage_seconds_total counter
container_cpu_usage_seconds_total{container="kindnet-cni",namespace="kube-system",pod="kindnet-ndczz"} 1.121325 1735032838055
container_cpu_usage_seconds_total{container="kube-proxy",namespace="kube-system",pod="kube-proxy-l5jhs"} 1.100665 1735032838936
container_cpu_usage_seconds_total{container="metrics-server",namespace="kube-system",pod="metrics-server-8598789fdb-nw6cq"} 7.333964 1735032837430
# HELP container_memory_working_set_bytes [STABLE] Current working set of the container in bytes
# TYPE container_memory_working_set_bytes gauge
container_memory_working_set_bytes{container="kindnet-cni",namespace="kube-system",pod="kindnet-ndczz"} 3.2923648e+07 1735032838055
container_memory_working_set_bytes{container="kube-proxy",namespace="kube-system",pod="kube-proxy-l5jhs"} 4.0628224e+07 1735032838936
container_memory_working_set_bytes{container="metrics-server",namespace="kube-system",pod="metrics-server-8598789fdb-nw6cq"} 4.7026176e+07 1735032837430
# HELP container_start_time_seconds [STABLE] Start time of the container since unix epoch in seconds
# TYPE container_start_time_seconds gauge
container_start_time_seconds{container="kindnet-cni",namespace="kube-system",pod="kindnet-ndczz"} 1.7350309825441425e+09
container_start_time_seconds{container="kube-proxy",namespace="kube-system",pod="kube-proxy-l5jhs"} 1.7350309819809804e+09
container_start_time_seconds{container="metrics-server",namespace="kube-system",pod="metrics-server-8598789fdb-nw6cq"} 1.7350309993126562e+09
# HELP node_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the node in core-seconds
# TYPE node_cpu_usage_seconds_total counter
node_cpu_usage_seconds_total 71.41304 1735032832343
# HELP node_memory_working_set_bytes [STABLE] Current working set of the node in bytes
# TYPE node_memory_working_set_bytes gauge
node_memory_working_set_bytes 2.134016e+08 1735032832343
# HELP pod_cpu_usage_seconds_total [STABLE] Cumulative cpu time consumed by the pod in core-seconds
# TYPE pod_cpu_usage_seconds_total counter
pod_cpu_usage_seconds_total{namespace="kube-system",pod="kindnet-ndczz"} 1.145182 1735032830497
pod_cpu_usage_seconds_total{namespace="kube-system",pod="kube-proxy-l5jhs"} 1.108676 1735032837395
pod_cpu_usage_seconds_total{namespace="kube-system",pod="metrics-server-8598789fdb-nw6cq"} 7.336168 1735032831254
# HELP pod_memory_working_set_bytes [STABLE] Current working set of the pod in bytes
# TYPE pod_memory_working_set_bytes gauge
pod_memory_working_set_bytes{namespace="kube-system",pod="kindnet-ndczz"} 3.3222656e+07 1735032830497
pod_memory_working_set_bytes{namespace="kube-system",pod="kube-proxy-l5jhs"} 4.0914944e+07 1735032837395
pod_memory_working_set_bytes{namespace="kube-system",pod="metrics-server-8598789fdb-nw6cq"} 4.732928e+07 1735032831254
# HELP resource_scrape_error [STABLE] 1 if there was an error while getting container metrics, 0 otherwise
# TYPE resource_scrape_error gauge
resource_scrape_error 0
As can be seen, swap stats are not shown here:
> kubectl get --raw "/api/v1/nodes/k8s-dev-worker/proxy/metrics/resource" | grep -i swap
>
What you expected to happen:
Swap to be included in metrics/resource endpoint stats.
How to reproduce it (as minimally and precisely as possible):
1. Bring up a cluster
2. Install metrics-server
3. Run kubectl get --raw "/api/v1/nodes/<NODE-NAME>/proxy/metrics/resource" | grep -i swap (the full command sequence is sketched below)
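Spelled out, the repro is roughly the following; the cluster layout is illustrative, the manifest URL is the standard upstream metrics-server one, and note that metrics-server on kind typically also needs --kubelet-insecure-tls added to its container args before it becomes Ready:
> kind create cluster
> kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
> kubectl get --raw "/api/v1/nodes/<NODE-NAME>/proxy/metrics/resource" | grep -i swap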
Anything else we need to know?:
Swap stats were introduced in this PR: kubernetes/kubernetes#118865.
That PR also shows the expected output.
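For reference, if I'm reading that PR right, the missing output should include lines roughly like the ones below; the metric names come from the PR, while the labels, values, and timestamps here are purely illustrative:
# HELP node_swap_usage_bytes Current swap usage of the node in bytes
# TYPE node_swap_usage_bytes gauge
node_swap_usage_bytes 0 1735032832343
# HELP pod_swap_usage_bytes Current swap usage of the pod in bytes
# TYPE pod_swap_usage_bytes gauge
pod_swap_usage_bytes{namespace="kube-system",pod="metrics-server-8598789fdb-nw6cq"} 0 1735032831254
# HELP container_swap_usage_bytes Current swap usage of the container in bytes
# TYPE container_swap_usage_bytes gauge
container_swap_usage_bytes{container="metrics-server",namespace="kube-system",pod="metrics-server-8598789fdb-nw6cq"} 0 1735032831254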
FWIW you're on an outdated release, but I would not be surprised if some kernel info like this isn't working properly; the "node containers" are a bit leaky, and I don't think SIG Node officially supports this environment.
What's your use case? This will probably take a bit of debugging ...
> FWIW you're on an outdated release, but I would not be surprised if some kernel info like this isn't working properly; the "node containers" are a bit leaky, and I don't think SIG Node officially supports this environment.
I can try to test it on a current release if it would be valuable
> What's your use case? This will probably take a bit of debugging ...
Just a development environment; I was trying to work on swap metrics.
This is not urgent to me by any means.
I suspect we'll see the same thing but ... worth a shot.
Makes sense, sorry. There hasn't been a ton of demand for metrics overall, and they're not part of conformance. We have some known issues around e.g. CPU and memory reflecting the underlying host (which is then repeated for each cluster/node); it's messy, and ideally we'd need more cooperation from kubelet and/or cAdvisor to mitigate it.
Maybe kubelet has relevant logs?
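If it helps, two ways to get at those in a kind setup (assuming, as kind normally does, that the node container is named after the node):
> docker exec k8s-dev-worker journalctl -u kubelet --no-pager | grep -i swap
> kind export logs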
FWIW, swap support is a recent thing in Kubernetes. Historically, Kubernetes has recommended disabling swap, and by default kubelet even had a hard requirement that swap be disabled (it was possible to opt out, with a warning log instead).
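For context, the kubelet side of that looks roughly like the following KubeletConfiguration snippet; the field names are from the upstream kubelet config, but whether kind wires them through, and whether the node containers' cgroup setup supports them, is a separate question:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false            # do not refuse to start on a swap-enabled node
featureGates:
  NodeSwap: true             # feature gate guarding swap support
memorySwap:
  swapBehavior: LimitedSwap  # allow Burstable pods to use a limited amount of swap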
EDIT: of course, @iholder101 is working on the swap support. "Development environment" is ambiguous for kind, though the Kubernetes project itself is our first priority.
For some SIG node work, you might have better luck with hack/local-up-cluster.sh in the main Kubernetes repo. It will turn the host into a single node cluster.
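For example, something along these lines from a kubernetes/kubernetes checkout; FEATURE_GATES is the usual way of passing feature gates to that script, but treat the exact invocation as a sketch:
> FEATURE_GATES=NodeSwap=true ./hack/local-up-cluster.sh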
Environment:
- kind version (kind version): kind v0.22.0 go1.21.7 linux/amd64
- Runtime info (docker info, podman info or nerdctl info): docker 27.2.1
- OS (/etc/os-release): Fedora 39
- Kubernetes version (kubectl version): 1.32 (main)