I've configured PVs for both Prometheus and Grafana, where /media/config is an NFS-mounted directory. After running make vendor followed by make, I noticed that a manifests/grafana-storage.yaml is created with the following content:
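For reference, a pre-created NFS-backed PV matching the names and sizes in my vars.jsonnet would look roughly like this (a sketch: the NFS server address and subdirectory are placeholders, not my actual values):

```yaml
# Sketch of a static NFS-backed PV for Prometheus.
# Name and size match vars.jsonnet; server/path are illustrative placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-prometheus
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10          # placeholder NFS server address
    path: /media/config/prometheus # subdirectory under the NFS mount
```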
I see that after a make deploy the PV for Grafana is in state Bound. However, no claim is made for the Prometheus PV, and since the configuration in vars.jsonnet is set to
{
...
  // Persistent volume configuration
  enablePersistence: {
    // Setting these to false, defaults to emptyDirs.
    prometheus: true,
    grafana: true,
    // If using a pre-created PV, fill in the names below. If blank, they will use the default StorageClass
    prometheusPV: 'pv-prometheus',
    grafanaPV: 'pv-grafana',
    // If required to use a specific storageClass, keep the PV names above blank and fill the storageClass name below.
    storageClass: '',
    // Define the PV sizes below
    prometheusSizePV: '2Gi',
    grafanaSizePV: '20Gi',
  },
...
}
the prometheus-k8s-0 pod hangs, waiting for a PV to use.
Looking at the description of the prometheus-k8s-0 pod, it was more or less obvious how the claim should be named. I therefore created a prometheus-storage.yaml file in manifests with the content below:
After provisioning the claim, the pod works as intended. I'm not sure why this storage configuration isn't created automatically the way the Grafana one is. Once this file is added to the manifests directory, it is also provisioned automatically by make deploy.
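For anyone hitting the same problem, a claim along these lines should match (a sketch based on the operator's StatefulSet claim-template naming, <claimTemplate>-<podName>; verify the exact claim name against the pod description, since my original file content didn't survive the paste):

```yaml
# Sketch of the PVC that prometheus-k8s-0 waits for. The Prometheus Operator's
# default claim template yields the name prometheus-k8s-db-prometheus-k8s-0.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-k8s-db-prometheus-k8s-0
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi              # matches prometheusSizePV in vars.jsonnet
  volumeName: pv-prometheus     # the pre-created PV from vars.jsonnet
  storageClassName: ""          # empty, to bind the static PV rather than a StorageClass
```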
Troubleshooting
Which kind of Kubernetes cluster are you using? (Kubernetes, K3s, etc)
$ k3s --version
k3s version v1.23.6+k3s1 (418c3fa8)
go version go1.17.5
Are all pods in "Running" state? If any is in CrashLoopBackOff or Error, check its logs.
prometheus-k8s-0 is hanging and waiting for a PVC request to claim the storage it needs
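The pending state can be confirmed with standard kubectl commands (the monitoring namespace used by this stack is assumed):

```shell
# Show why the pod is stuck; look for an event like
# "pod has unbound immediate PersistentVolumeClaims"
kubectl -n monitoring describe pod prometheus-k8s-0

# List claims in the namespace; a missing or Pending PVC is the culprit
kubectl -n monitoring get pvc

# Check whether the pre-created PV ever reached the Bound state
kubectl get pv pv-prometheus
```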
Does your cluster already work with other applications that serve HTTP/HTTPS? If not, first deploy an example NGINX and test its access through the created URL.
Grafana was accessible, though no data was provided from Prometheus as it wasn't able to initialize properly
If you enabled persistence, does your cluster already provide persistent storage (PVs) to other applications?
yes, configuration pasted above
Does it provide dynamic storage through a StorageClass?
no
If you deployed the monitoring stack and some targets are not available or showing no metrics in Grafana, make sure you don't have IPTables rules or use a firewall on your nodes before deploying Kubernetes.
No active UFW for now and NFS is also working fine
Customizations
Did you customize vars.jsonnet? Put the contents below:
Relevant portion was already posted above
Did you change any other file? Put the contents below:
I had to add the prometheus-storage.yaml file with the above-mentioned content to get it working.