Workspace Resource Monitor Plugin #17205
Comments
cc @gorkem
We should take care to communicate directly with the Kubernetes APIs and not go through the Theia back end for this: the back end shares resources with the rest of the pod, so if the pod gets resource-limited, you often cannot communicate with it anymore. The IDE then simply goes into "offline" mode.
Good point.
Percentage is good for disk space. %CPU as we usually see in a system monitor and RAM usage could be useful, but I would rather see
Let's please not feature-creep this into a duplicate of the OpenShift console. Right now, we're talking about something that fits into the status bar.
It is not feature creep, but a feature that fits Che end users.
@ericwill @l0rd @fbricon
So I am not sure if it will be possible to get this information by using the k8s API, but maybe I'm wrong.
@svor back-end plugins run in the same pod as the workspace, and their communication with the browser is routed through the Theia back-end process. They will become useless when resources are scarce. See #17205 (comment)
To my understanding, when the workspace's pod doesn't have enough resources, we can't do anything in the IDE, not even display resources. It's like on a local machine: if your processor is overloaded, you can't even open the System Monitor to check. I don't see another way to solve this issue except adding a plugin that runs in the workspace pod (maybe as a sidecar plugin). @tsmaeder do you have another idea?
We could have a Theia extension that has no back-end component and connects directly to the Kubernetes APIs.
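A no-back-end approach could be sketched as a browser-side call against the cluster API. A minimal sketch, assuming the API server URL, node name, and a bearer token are somehow made available to the front end (how the token reaches the browser is an open question, and all names below are hypothetical):

```typescript
// Hypothetical sketch: query node status directly from the browser via
// the Kubernetes REST API, bypassing the Theia back-end process entirely.
interface NodeStatus {
  allocatable: { cpu: string; memory: string };
  capacity: { cpu: string; memory: string };
}

async function fetchNodeStatus(
  apiServer: string,
  nodeName: string,
  token: string
): Promise<NodeStatus> {
  const resp = await fetch(`${apiServer}/api/v1/nodes/${nodeName}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!resp.ok) {
    throw new Error(`Kubernetes API returned ${resp.status}`);
  }
  const node = await resp.json();
  return node.status as NodeStatus;
}
```

Because this runs in the browser against the API server directly, it keeps working even when the workspace pod itself is starved (which is exactly the failure mode described above), though it is subject to the CORS concerns mentioned later in the thread.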
I tried to use the JavaScript kubernetes-client library to get resource information in a Theia plugin, but it looks like the Metrics API is not implemented yet in this lib. It has only
where
For example, node info from the K8s API:
{
"Node":{
"metadata":{
"annotations":{
"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock",
"node.alpha.kubernetes.io/ttl":"0",
"volumes.kubernetes.io/controller-managed-attach-detach":"true"
},
"creationTimestamp":"2020-09-24T12:33:28.000Z",
"labels":{
"beta.kubernetes.io/arch":"amd64",
"beta.kubernetes.io/os":"linux",
"kubernetes.io/arch":"amd64",
"kubernetes.io/hostname":"m01",
"kubernetes.io/os":"linux",
"minikube.k8s.io/commit":"eb13446e786c9ef70cb0a9f85a633194e62396a1",
"minikube.k8s.io/name":"minikube",
"minikube.k8s.io/updated_at":"2020_09_24T15_33_31_0700",
"minikube.k8s.io/version":"v1.8.2",
"node-role.kubernetes.io/master":""
},
"name":"m01",
"resourceVersion":"611752",
"selfLink":"/api/v1/nodes/m01",
"uid":"18e91c2e-26cb-40b7-9ac3-151060f3df7f"
},
"spec":{
},
"status":{
"addresses":[
{
"address":"192.168.99.150",
"type":"InternalIP"
},
{
"address":"m01",
"type":"Hostname"
}
],
"allocatable":{
"cpu":"4",
"ephemeral-storage":"16390427417",
"hugepages-2Mi":"0",
"memory":"10125544Ki",
"pods":"110"
},
"capacity":{
"cpu":"4",
"ephemeral-storage":"17784752Ki",
"hugepages-2Mi":"0",
"memory":"10227944Ki",
"pods":"110"
},
"conditions":[
{
"lastHeartbeatTime":"2020-10-05T13:18:43.000Z",
"lastTransitionTime":"2020-09-24T12:33:25.000Z",
"message":"kubelet has sufficient memory available",
"reason":"KubeletHasSufficientMemory",
"status":"False",
"type":"MemoryPressure"
},
{
"lastHeartbeatTime":"2020-10-05T13:18:43.000Z",
"lastTransitionTime":"2020-10-02T11:11:04.000Z",
"message":"kubelet has no disk pressure",
"reason":"KubeletHasNoDiskPressure",
"status":"False",
"type":"DiskPressure"
},
{
"lastHeartbeatTime":"2020-10-05T13:18:43.000Z",
"lastTransitionTime":"2020-09-24T12:33:25.000Z",
"message":"kubelet has sufficient PID available",
"reason":"KubeletHasSufficientPID",
"status":"False",
"type":"PIDPressure"
},
{
"lastHeartbeatTime":"2020-10-05T13:18:43.000Z",
"lastTransitionTime":"2020-09-24T12:33:52.000Z",
"message":"kubelet is posting ready status",
"reason":"KubeletReady",
"status":"True",
"type":"Ready"
}
],
"daemonEndpoints":{
"kubeletEndpoint":{
"Port":10250
}
},
"images":[
{
"names":[
"quay.io/eclipse/che-keycloak@sha256:cc03221d497107ca997eccf49ee791532d43f62403fe2ef88990e4d486634d1a",
"quay.io/eclipse/che-keycloak:7.18.2"
],
"sizeBytes":1201038736
},
{
"names":[
"vsvydenko/che-theia@sha256:a9860eec38330904b211904e9beedf3318669772fced5e47f80f44bd39737710",
"vsvydenko/che-theia:next"
],
"sizeBytes":606004418
},
{
"names":[
"vsvydenko/che-theia-dev@sha256:68e4f59325f652b81ae89b8fd641d39e8c9b5376f39b1462d3f9614871e76f7f",
"vsvydenko/che-theia-dev:next"
],
"sizeBytes":600001978
},
{
"names":[
"quay.io/eclipse/che-theia@sha256:4eb52a8e21a8463098d4a769bc57a196a7553d9013cc125cf0bf255b626ef117",
"quay.io/eclipse/che-theia:7.18.2"
],
"sizeBytes":505637422
},
{
"names":[
"quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:d0b22f715fcea5598ef7f869d308b55289a3daaa12922fa52a1abf17703c88e7",
"quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1"
],
"sizeBytes":483167446
},
{
"names":[
"centos/postgresql-96-centos7@sha256:b681d78125361519180a6ac05242c296f8906c11eab7e207b5ca9a89b6344392",
"centos/postgresql-96-centos7:9.6"
],
"sizeBytes":345986118
},
{
"names":[
"quay.io/eclipse/che-server@sha256:4216b2c15f9086b1e01c2d67497c846a2fb1834f08bcc83ff94a8caf1ada8785",
"quay.io/eclipse/che-server:7.18.2"
],
"sizeBytes":317538843
},
{
"names":[
"vsvydenko/python-sidecar@sha256:7e7b368997555737353852d9e46993fce75e75639eb658ba45f342a0a4106b2f",
"vsvydenko/python-sidecar:latest"
],
"sizeBytes":305694358
},
{
"names":[
"k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646",
"k8s.gcr.io/etcd:3.4.3-0"
],
"sizeBytes":288426917
},
{
"names":[
"quay.io/eclipse/che-sidecar-java@sha256:997cfa109e8e85e2038cf765480a095dc9ee1d68ebfab50888cedffc5be393a3",
"quay.io/eclipse/che-sidecar-java:11-7bd8c8c"
],
"sizeBytes":277096417
},
{
"names":[
"quay.io/eclipse/che-operator@sha256:f32bb46276437e8c7508b441cffbd9c8fe6260aa1eeefc66f5d0616dba64d40c",
"quay.io/eclipse/che-operator:7.18.2"
],
"sizeBytes":184389275
},
{
"names":[
"k8s.gcr.io/kube-apiserver@sha256:33400ea29255bd20714b6b8092b22ebb045ae134030d6bf476bddfed9d33e900",
"k8s.gcr.io/kube-apiserver:v1.17.3"
],
"sizeBytes":170986003
},
{
"names":[
"k8s.gcr.io/kube-controller-manager@sha256:2f0bf4d08e72a1fd6327c8eca3a72ad21af3a608283423bb3c10c98e68759844",
"k8s.gcr.io/kube-controller-manager:v1.17.3"
],
"sizeBytes":160918035
},
{
"names":[
"bitnami/metrics-server@sha256:6abc18ebf7e1a7d6d2588a8a5cee9508a4f99a42fdd52324fe524fca50136ba0",
"bitnami/metrics-server:0.3.7-debian-10-r118"
],
"sizeBytes":149018537
},
{
"names":[
"registry.access.redhat.com/ubi8-minimal@sha256:372622021a90893d9e25c298e045c804388c7666f3e756cd48f75d20172d9e55",
"registry.access.redhat.com/ubi8-minimal:8.2-345"
],
"sizeBytes":141675770
},
{
"names":[
"quay.io/eclipse/che-machine-exec@sha256:1d434c46c7e5a5876556cc997e949eb1434d7bd4e8e57a93c847802695459e76",
"quay.io/eclipse/che-machine-exec:7.18.2"
],
"sizeBytes":117814856
},
{
"names":[
"k8s.gcr.io/kube-proxy@sha256:3a70e2ab8d1d623680191a1a1f1dcb0bdbfd388784b1f153d5630a7397a63fd4",
"k8s.gcr.io/kube-proxy:v1.17.3"
],
"sizeBytes":115964919
},
{
"names":[
"quay.io/eclipse/che-theia-endpoint-runtime-binary@sha256:0e6efeb58424725a8eecb0b76e59f051ff0447f5ac5f658a752981c4ec5873b9",
"quay.io/eclipse/che-theia-endpoint-runtime-binary:next"
],
"sizeBytes":100898796
},
{
"names":[
"k8s.gcr.io/kube-scheduler@sha256:b091f0db3bc61a3339fd3ba7ebb06c984c4ded32e1f2b1ef0fbdfab638e88462",
"k8s.gcr.io/kube-scheduler:v1.17.3"
],
"sizeBytes":94435859
},
{
"names":[
"kubernetesui/dashboard@sha256:fc90baec4fb62b809051a3227e71266c0427240685139bbd5673282715924ea7",
"kubernetesui/dashboard:v2.0.0-beta8"
],
"sizeBytes":90835427
},
{
"names":[
"quay.io/eclipse/che-sidecar-node@sha256:fab5f98d9546eeff68d54bd71ea6e3d91bbf21b7f6beea1ae1117da432731b61",
"quay.io/eclipse/che-sidecar-node:12-026416c"
],
"sizeBytes":89336883
},
{
"names":[
"gcr.io/k8s-minikube/storage-provisioner@sha256:088daa9fcbccf04c3f415d77d5a6360d2803922190b675cb7fc88a9d2d91985a",
"gcr.io/k8s-minikube/storage-provisioner:v1.8.1"
],
"sizeBytes":80815640
},
{
"names":[
"quay.io/eclipse/che-sidecar-node@sha256:0370b2c3d52d31c9fd52bff3edb84bd36bbcc2aff6224957d6dd94935843ee59",
"quay.io/eclipse/che-sidecar-node:10-0cb5d78"
],
"sizeBytes":76431425
},
{
"names":[
"quay.io/eclipse/che-plugin-registry@sha256:a265b42a9d8e02a2b3f6a2b26de9a2e426633b11c1979951ef920bd8ffa2a79c",
"quay.io/eclipse/che-plugin-registry:7.18.2"
],
"sizeBytes":68778916
},
{
"names":[
"quay.io/eclipse/che-devfile-registry@sha256:411138bab8a5d8f52e02e16e1506faa8e31b6ac1bc62d2ae479a342c2bcaa864",
"quay.io/eclipse/che-devfile-registry:7.18.2"
],
"sizeBytes":68543545
},
{
"names":[
"k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7",
"k8s.gcr.io/coredns:1.6.5"
],
"sizeBytes":41578211
},
{
"names":[
"kubernetesui/metrics-scraper@sha256:2026f9f7558d0f25cc6bab74ea201b4e9d5668fbc378ef64e13fddaea570efc0",
"kubernetesui/metrics-scraper:v1.0.2"
],
"sizeBytes":40101552
},
{
"names":[
"quay.io/eclipse/che-jwtproxy@sha256:881d1c91e7f5840314f25104ef5c0acee59ed484a5f9ef39daf3008725ea1033",
"quay.io/eclipse/che-jwtproxy:0.10.0"
],
"sizeBytes":17549176
},
{
"names":[
"quay.io/eclipse/che-plugin-artifacts-broker@sha256:f8591171ab5ac51e2af37ee9969781097c41a2e9f952de283e084f258411e5ca",
"quay.io/eclipse/che-plugin-artifacts-broker:v3.3.0"
],
"sizeBytes":12234442
},
{
"names":[
"quay.io/eclipse/che-plugin-metadata-broker@sha256:1154eea18f1bf3fab9dc76618ebe2f00f7fe03236f2e04bfe172de339ae10796",
"quay.io/eclipse/che-plugin-metadata-broker:v3.3.0"
],
"sizeBytes":12234442
},
{
"names":[
"k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea",
"k8s.gcr.io/pause:3.1"
],
"sizeBytes":742472
}
],
"nodeInfo":{
"architecture":"amd64",
"bootID":"4a1ccf7d-50b7-4d78-9cba-d0e7ebc5177b",
"containerRuntimeVersion":"docker://19.3.6",
"kernelVersion":"4.19.94",
"kubeProxyVersion":"v1.17.3",
"kubeletVersion":"v1.17.3",
"machineID":"75c58a17892844a1bae7df4fec22398d",
"operatingSystem":"linux",
"osImage":"Buildroot 2019.02.9",
"systemUUID":"4d2de55e-d13c-4599-aeb6-d51895745baa"
}
}
},
"CPU":{
"Capacity":4,
"RequestTotal":1.25,
"LimitTotal":0.5
},
"Memory":{
"Capacity":10368557056,
"RequestTotal":679478956,
"LimitTotal":6980897624
}
}
Just a warning that we should keep CORS restrictions in mind.
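As a sketch (all names hypothetical), the CPU and Memory summaries at the end of that payload could be turned into a percentage for display:

```typescript
// Hypothetical helper: derive a usage percentage from the Capacity /
// RequestTotal / LimitTotal figures in the payload above.
interface ResourceSummary {
  Capacity: number;
  RequestTotal: number;
  LimitTotal: number;
}

function usagePercent(r: ResourceSummary): number {
  // Share of node capacity currently requested, rounded to one decimal.
  // Treating requests as "usage" is an approximation; actual consumption
  // would have to come from the Metrics API.
  return Math.round((r.RequestTotal / r.Capacity) * 1000) / 10;
}

const cpu: ResourceSummary = { Capacity: 4, RequestTotal: 1.25, LimitTotal: 0.5 };
console.log(usagePercent(cpu)); // → 31.3
```

Note that real usage figures live behind the Metrics API (`/apis/metrics.k8s.io/v1beta1/nodes`, served by metrics-server), which can still be called over plain REST even if the client library does not wrap it.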
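For reference, the API server's CORS behavior is controlled by a kube-apiserver flag that takes a list of origin regexps; without a matching entry, browser-side calls to the API will be blocked. A minimal illustration (the origin pattern is an assumption for this example):

```shell
# kube-apiserver must allow the IDE's origin for in-browser API calls;
# the pattern below is illustrative only.
kube-apiserver --cors-allowed-origins='.*\.example\.com$' ...
```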
Is your enhancement related to a problem? Please describe.
If a workspace is slow, users have no clue whether that's because the disk is full, the disk is not performant, a process is getting OOM-killed, the CPU is being throttled, or it's simply an application problem.
Describe the solution you'd like
A plugin that shows the % usage of the underlying resources (CPU/memory/disk), like https://marketplace.visualstudio.com/items?itemName=mutantdino.resourcemonitor, but using the k8s API to retrieve the usage.
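A minimal sketch of the requested status-bar text, formatting only (fetching real figures from the Kubernetes API is out of scope here, and the function name and layout are assumptions):

```typescript
// Hypothetical formatter for a status-bar entry such as
// "CPU 31% | Mem 7% | Disk 42%"; a Theia/VS Code extension would set
// this string on a status bar item and refresh it periodically.
function statusBarText(cpuPct: number, memPct: number, diskPct: number): string {
  const fmt = (n: number) => `${Math.round(n)}%`;
  return `CPU ${fmt(cpuPct)} | Mem ${fmt(memPct)} | Disk ${fmt(diskPct)}`;
}

console.log(statusBarText(31.25, 6.6, 42.1)); // → "CPU 31% | Mem 7% | Disk 42%"
```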