[stable/postgresql] pod not starting #9093
Having the same issue upon restart. Maintainers please help!!! |
Cannot believe that after two weeks nobody bothers to fix this... |
I am not able to reproduce the issue using Docker for OSX. I followed these steps:

```console
$ helm install stable/postgresql
$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
dull-grasshopper-postgresql-0   1/1     Running   0          8m
```

It seems to be an issue related to Docker for Windows. Can you try installing the chart without persistence?
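For reference, a minimal sketch of installing with persistence disabled, assuming the chart's persistence.enabled value:

```console
$ helm install stable/postgresql --set persistence.enabled=false
```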
Regarding the choice of image: you can find all the information in the issue and PR opened to migrate the previous image to this one. |
Hi @carrodher,
Being able to see the actual logs for why it fails in the first place would be awesome. Logs should be visible without the overhead of injecting a sidecar. |
I am trying to reproduce the error on different platforms, but no luck; it seems to be an issue related to how Docker for Windows manages volumes. These are the relevant values:

```yaml
## Init containers parameters:
## volumePermissions: Change the owner of the persist volume mountpoint to RunAsUser:fsGroup
##
volumePermissions:
  image:
    registry: docker.io
    repository: bitnami/minideb
    tag: latest
    pullPolicy: Always
  securityContext:
    runAsUser: 0

## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001

replication:
  enabled: true
  user: repl_user
  password: repl_password
  slaveReplicas: 1
```

Looking through the internet I found docker/for-win#2048, which seems to be the same issue, but it has no response. |
@carrodher I don't believe this is related to Windows. I'm having this problem on GCP using GCEPersistentDisk. Using the default values, the chart installs fine the first time. However, if I delete the release and then reinstall it against the same persistent disk, the pod fails to start (repro sketch below).
Without digging too deep yet, I imagine this has to do with permissions, but there isn't an obvious error to follow.
Basically, it's very dangerous to delete your Postgres chart, as you won't be able to start it again using the same persistent disk. |
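A sketch of that reproduction, assuming Helm v2 syntax and a hypothetical release name my-release:

```console
$ helm install --name my-release stable/postgresql   # first install: pod comes up fine
$ helm delete --purge my-release                     # the persistent disk (and possibly the PVC) survives
$ helm install --name my-release stable/postgresql   # reinstall against the leftover volume: pod fails to start
```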
I am experiencing the same thing in EKS. If I attempt a deployment with the same name as a previously deleted release, postgres will fail to start if a persistent volume remains from the previous deployment. |
Same issue here. It even fails when I delete all previous PVs and PVCs. |
I've enabled the image's debug mode. I'm having issues with file ownership. |
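For anyone else who wants the extra output, a sketch of enabling debug mode at install time, assuming the chart exposes the Bitnami image.debug value (check values.yaml to confirm):

```console
$ helm install stable/postgresql --set image.debug=true
```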
I changed my NFS setting to |
Hi @bholzer
According to what you describe, it should be related to previous PVCs that were not removed when the Helm chart was deleted and that do not have the right permissions. Could you please check the existing PVCs after deleting the previous chart (e.g. for a chart named "my-release")? You can then remove the previous PVCs, as sketched below.
After that you should be able to install PostgreSQL without issues. |
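A sketch of checking and removing leftover claims, assuming a release named my-release and the chart's usual data-&lt;release&gt;-postgresql-&lt;ordinal&gt; claim naming:

```console
$ kubectl get pvc                                   # list claims left behind by the old release
$ kubectl delete pvc data-my-release-postgresql-0   # remove the stale claim so a fresh one is provisioned
```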
@alexandruast it sounds like others are having problems with pre-existing PVCs in cloud providers, but no one has addressed your original issue where Postgres won't start with persistence enabled on Docker for Windows (D4W). I've been seeing the same failure, and while I don't have a full answer, here is some extra information that might help and a possible workaround. (Sorry for the long post. This is in part for my own peace of mind, to get everything I've found written down in one place.)
There is a related open ticket on the Bitnami repo this chart is based on: bitnami/bitnami-docker-postgresql#91. In that ticket @juan131 mentioned checking the nami logs:

```console
> kubectl exec vocal-dachshund-postgresql-0 cat /opt/bitnami/postgresql/logs/postgresql.log
2019-01-04 20:49:01.744 GMT [86] FATAL: data directory "/opt/bitnami/postgresql/data" has wrong ownership
2019-01-04 20:49:01.744 GMT [86] HINT: The server must be started by the user that owns the data directory.
```

I don't know how to use a sidecar to read these logs, but running that command should help identify the root cause of your problem. I'm betting it's the same failure as mine, since we are both using Docker for Windows.

Docker for Windows has a known limitation in its hostpath storage class: by default D4W mounts volumes from the Windows host, and the ownership change the container needs does not work there. That issue also mentions the same error I was seeing. You can follow those instructions, or create these Kubernetes objects to make a volume that should work:

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pgdata
  labels:
    type: local
spec:
  storageClassName: hostpath
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/tmp"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pgdata
spec:
  storageClassName: hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```

Once the PV and PVC exist, you should be able to install this chart with the command: ... (see the sketch after this comment).

The catch is that because this mounts the volume to the Linux VM's /tmp/ directory, it will stick around until the Linux VM is blown away, so all future deployments will use the same mounted volume, which can cause problems with permissions etc. D4W intentionally locks down the Linux VM so you can't SSH into it. I've found a roundabout way to gain access to it and clear the /tmp/ directory manually, but it's not pretty. I can provide the method if anyone thinks it would be useful. Otherwise, you can delete the Moby Linux VM and have D4W recreate it at startup. |
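A sketch of the elided install command, assuming the chart's persistence.existingClaim parameter and the pgdata claim created above:

```console
$ helm install stable/postgresql --set persistence.existingClaim=pgdata
```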
Hi @aerotog Thanks so much for all the details you shared. It might be useful to create a troubleshooting guide with these and other solutions to workaround known issues on D4W. What do you think @javsalgar ? |
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions. |
This issue is being automatically closed due to inactivity. |
Is this a request for help?:
YES
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Version of Helm and Kubernetes:
```console
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:05:37Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
```
Which chart:
stable/postgresql
DEPLOYED postgresql-2.3.1 10.5.0 default
What happened:
CrashLoopBackOff on start.
Also, there is no straightforward way to see any logs beyond this status.
Hints on how to inject a sidecar for debugging purposes would also be welcome (see the sketch below).
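For illustration, a generic sidecar pattern for tailing a log file from a shared volume; this is a minimal sketch of the idea, not an option this chart exposes, and the container name, image, and paths are hypothetical:

```yaml
# Pod spec fragment: a sidecar that streams a log file to stdout,
# so `kubectl logs <pod> -c log-tailer` can read it (illustrative only)
containers:
  - name: log-tailer
    image: busybox
    command: ["sh", "-c", "tail -F /logs/postgresql.log"]
    volumeMounts:
      - name: logs          # hypothetical volume shared with the postgresql container
        mountPath: /logs
```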
What you expected to happen:
Pod running and serving connections
How to reproduce it (as minimally and precisely as possible):
helm install stable/postgresql
Anything else we need to know:
Running on Docker for Windows Version 18.06.1-ce-win73 (19507)
Why this chart is based on the Bitnami image and not the official postgres image is something I would appreciate having clarified.