Cannot initialize Nextcloud when persistence is enabled on Kubernetes #1006
Comments
I had this issue too. The mistake I made was persisting /var/www/html, which would get stuck at initializing. Persist only the data directory and it should work. That said, if your pod ends up restarting after the initial installation, you will get another error message: "Username is invalid because files already exist for this user". The way to get around this is to always change your NEXTCLOUD_ADMIN_USER before you restart the pod; the new user can be deleted later directly from the application. Any suggestion on how to bypass this by editing the entrypoint would be nice, because I am currently trying to figure out how to do that without editing NEXTCLOUD_ADMIN_USER every time.
Hello @johnbayo, I read your response, and thank you, but that approach is not very "automatic" because it needs human intervention every time the pod restarts... it's as if Nextcloud cannot work normally on Kubernetes like other pods do. On the other hand, if you do not persist custom_apps and settings, how do you keep those persistent across pod restarts?
@johnbayo if you persist only data, will the config stay persistent when the pod restarts, given that the config lives at mountPath: /var/www/html/config?
@mamiapatrick no, you can't persist the config. The config gets generated only on initialization. You have to edit the entrypoint so that another script updates your config on each pod restart.
@johnbayo but why is it that every time I delete the pod, I get an error that the username already exists? The pod gets deleted when I change some configuration.
@mamiapatrick you need to change the admin user before deleting your pod each time, or another option would be to edit the entrypoint to ignore this. There might be another solution, but unfortunately I am not aware of one.
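To make the workaround above concrete: the admin account the entrypoint creates comes from the NEXTCLOUD_ADMIN_USER / NEXTCLOUD_ADMIN_PASSWORD environment variables, so "change the user before restarting" means editing that value wherever the pod gets its environment. A minimal sketch, assuming a plain Deployment rather than the helm chart (the names and the Secret are illustrative):

```yaml
# Hedged sketch of the relevant container env in the Deployment (or the
# chart values that render it). The entrypoint only uses these on a fresh
# install, which is why the workaround above rotates the user name.
containers:
  - name: nextcloud
    image: nextcloud:stable
    env:
      - name: NEXTCLOUD_ADMIN_USER
        value: "admin2"                 # bump this before deleting the pod
      - name: NEXTCLOUD_ADMIN_PASSWORD
        valueFrom:
          secretKeyRef:
            name: nextcloud-admin       # hypothetical Secret
            key: password
```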
At least some light on this issue. Indeed /var/www/html can't be mounted, or it will get stuck, but once the installation completes the pod never comes up:
i5Js@nanopim4:~/nextcloud$ kubectl logs --follow nextcloud -n nextcloud
i5Js@nanopim4:~/nextcloud$ kubectl get pod -n nextcloud
Any tips?
Glad I'm not the only one hitting this. After some more research while drafting this post, I found an issue that I think is related. My issue / steps to reproduce: I am attempting to update to the new ... Every time I kill the pod, a new one comes back... which is exactly what is supposed to happen. However, when I go to the Nextcloud instance, I get the message ... I think that this has something to do with the ...
I think I've figured out how to get past this:
Short version:
It's one hell of a messy workaround, but it seems to be working for me so far.
Hey @kquinsland, glad to read your message. Yesterday I managed to install NC on my own Kubernetes cluster and I ran into a bunch of errors related to what you are describing. I deployed NC with persistence (PVC) and an external Postgres database. The first run works as expected, setting the liveness and readiness probes to 5 minutes because it takes time to set up the whole environment; but if the pod restarts, I hit all the problems exposed here and in nextcloud/helm#590. I will post my values later today; right now I don't have them, but basically I have an NFS disk backing my PV/PVC, and I mount /var/www/html/config exactly as the deployment says, except that I removed the part that mounts /var/www/html. It got stuck otherwise. Among other things, I spent a lot of time yesterday making it work. The only solution I found was deleting the whole DB and the whole directory mounted in the PVC so it runs from zero, which is not what I want, of course. I am going to try replacing only the config dir. I could not make it work with more than one replica; I guess it is the same problem, though, where all of the replicas try to reinstall NC.
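The stretched probe windows mentioned above look roughly like this in the chart's values (key names assumed from the nextcloud/helm chart's probe section; verify against the chart version in use):

```yaml
# Hedged sketch: give the first start ~5 minutes before probing begins,
# as described in the comment above. Key names are assumptions about the
# nextcloud/helm chart's probe settings, not verified output.
livenessProbe:
  enabled: true
  initialDelaySeconds: 300
  periodSeconds: 30
  failureThreshold: 5
readinessProbe:
  enabled: true
  initialDelaySeconds: 300
  periodSeconds: 30
  failureThreshold: 5
```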
Hi @nilbacardit26, you've described my pain word for word... I'm done; I think Nextcloud is not ready to work with Kubernetes...
@i5Js You are right. We basically use K8s to be able to rely on a system that can recover from errors on its own, and right now that is not the case with the current chart and entrypoint.
Same problem here. It would be great if it worked on Kubernetes. Sad.
Hey guys,
Here's what the interesting part of entrypoint.sh looks like:
And here's what I added in values.yaml after creating the "docker-entrypoint" ConfigMap, replacing the original lines of code with the above:
Also, in values.yaml:
It takes 5 "restarts" for the ...
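A rough sketch of the ConfigMap-over-entrypoint approach described above: ship the modified script in a ConfigMap and mount it over the image's /entrypoint.sh. The values.yaml keys (nextcloud.extraVolumes / extraVolumeMounts) are an assumption about the nextcloud/helm chart, so verify them against the chart version you use:

```yaml
# values.yaml -- hedged sketch; key names and nesting assumed, not verified.
nextcloud:
  extraVolumes:
    - name: docker-entrypoint
      configMap:
        name: docker-entrypoint   # ConfigMap built from your edited entrypoint.sh
        defaultMode: 0755         # the script must stay executable
  extraVolumeMounts:
    - name: docker-entrypoint
      mountPath: /entrypoint.sh   # shadows the image's own entrypoint
      subPath: entrypoint.sh
```

The ConfigMap itself can be created from the edited script beforehand, e.g. with kubectl create configmap docker-entrypoint --from-file=entrypoint.sh.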
I can confirm having the same issue with NFS v4 as the backing storage for the PVC used for Nextcloud's persistence. I recently bumped the image from 22.1.1 to 22.2.0. I'm curious whether iSCSI might be the way to go for situations like these, but I'd prefer to use NFS, as it's infinitely simpler to set up and get going than iSCSI on Debian.
nextcloud/docker#1006 recommended setting a livenessProbe and readinessProbe to deal with the slow rsync on initial launch. However, startupProbe is the recommended way to deal with this rather than making the livenessProbe and readinessProbe unnecessarily long, which increases the latency to detect failure conditions.
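For illustration, a startupProbe along these lines gives the initial rsync a long window while leaving the other probes tight; the path, port, and thresholds below are placeholders rather than values taken from the chart:

```yaml
# Hedged sketch of a container-level probe setup: the startupProbe tolerates
# up to 60 x 10s = 10 minutes of first-boot work; once it succeeds, the short
# liveness/readiness probes take over for normal failure detection.
startupProbe:
  httpGet:
    path: /status.php    # Nextcloud's status endpoint; adjust if fronted differently
    port: 80
  periodSeconds: 10
  failureThreshold: 60
livenessProbe:
  httpGet:
    path: /status.php
    port: 80
  periodSeconds: 10
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /status.php
    port: 80
  periodSeconds: 10
  failureThreshold: 3
```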
So I was looking at nextcloud/helm#590 (comment) and nextcloud/helm#590 (comment) in nextcloud/helm#590, and I think both @kquinsland and @WladyX are onto something. I posted some ideas and suggestions in nextcloud/helm#590 (comment), but the gist seems to be that we check version.php. The issue is that I'm not sure how to persist that file without just using our normal PVC setup (which users don't want to use if they're already using S3), since version.php is not created by nextcloud/helm nor nextcloud/docker. I think it's created by nextcloud/server 🤔 Perhaps we can do some sort of check to see if S3 is already enabled? 🤔 Maybe checking if $OBJECTSTORE_S3_BUCKET is set in the ...
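For context, the S3 object store the comment refers to is configured through environment variables on the nextcloud container, so a check like the one suggested would key off something like the following being present (bucket, host, and Secret names are placeholders):

```yaml
# Illustrative env block -- placeholder values, not a working S3 setup.
env:
  - name: OBJECTSTORE_S3_BUCKET
    value: "nextcloud-data"            # presence of this is what such a check could test
  - name: OBJECTSTORE_S3_HOST
    value: "s3.example.com"
  - name: OBJECTSTORE_S3_KEY
    valueFrom:
      secretKeyRef:
        name: nextcloud-s3             # hypothetical Secret
        key: access-key
  - name: OBJECTSTORE_S3_SECRET
    valueFrom:
      secretKeyRef:
        name: nextcloud-s3
        key: secret-key
```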
The core of the matter is that some k8s users seem to be disabling persistence of E.g.
It seems in most cases this is an NFS / rsync interaction. Sometimes it is merely a performance matter (some of the examples above plus others like #1582). Sometimes it's a configuration matter (e.g. #1200). However, it also seems many people have no issues, so perhaps we limit the scope to:
P.S. Redesigning the image (and/or Nextcloud Server itself) to work w/o persistent storage for its installation folder is a bigger conversation (and a longer road probably), and already covered in #340 and #2044. |
Hello, I just installed Nextcloud in my private Kubernetes cluster. If I install with no persistence, the software (pod) launches fine, but any time I try to install it on a persistent volume it just gets stuck at initializing and the pod never starts. Because of this I cannot persist data, config, and other information. I also noticed that even though I set up an external database, I still have SQLITE_DATABASE as an environment variable.
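On the SQLITE_DATABASE point: the image picks its database backend from environment variables, so an external Postgres is wired up roughly as below. If you deploy via the helm chart, the equivalent is usually disabling its internal/SQLite database option and filling in the external-database values; check your chart's values for the exact keys. The host and Secret names here are placeholders:

```yaml
# Hedged sketch: point the image at an external Postgres so the
# SQLite fallback is never used. Values are placeholders.
env:
  - name: POSTGRES_HOST
    value: "postgres.database.svc.cluster.local"
  - name: POSTGRES_DB
    value: "nextcloud"
  - name: POSTGRES_USER
    valueFrom:
      secretKeyRef:
        name: nextcloud-db              # hypothetical Secret
        key: username
  - name: POSTGRES_PASSWORD
    valueFrom:
      secretKeyRef:
        name: nextcloud-db
        key: password
```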