Kubernetes - Redeploying not working when using S3 as default storage backend #590
Same problem!

Exactly the same problem, even with manual YAML manifests (not Helm) and PersistentVolumeClaims.
Just out of curiosity, what do you use as a storage backend?
Hi, I use Azure File storage, but I have to mount it with specific mount options (uid 33 for www-data) in my persistentVolume manifest, otherwise it doesn't work:

```yaml
mountOptions:
  - dir_mode=0770
  - file_mode=0770
  - uid=33
  - gid=33
```

Actually, I battle with …
Anybody figure out a fix to this problem? I have all data persisted on a NAS and just wiped my Kubernetes host to restart my containers from scratch. When I launch Nextcloud, I get the same "It looks like you are trying to reinstall your Nextcloud. However the file CAN_INSTALL is missing from your config directory. Please create the file CAN_INSTALL in your config folder to continue." error. I cannot find any documentation on this, nor many other threads.
@GoingOffRoading You need to make sure that …
This seems to be a bug when you use S3 as primary storage. We don't want to have any persistent storage at all but use S3 only.
@robertoschwald Just get the file out of the config folder and put it into your Helm chart as well? Or use an init container that creates that file from something you stored in S3...
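A rough sketch of that init-container idea (hypothetical: the container name, image, volume name, and mount path below are assumptions for illustration, not part of this chart):

```yaml
initContainers:
  - name: create-can-install
    image: busybox
    # Recreate the CAN_INSTALL marker before Nextcloud starts, so a
    # redeploy isn't blocked on the missing file.
    command: ["sh", "-c", "touch /var/www/html/config/CAN_INSTALL"]
    volumeMounts:
      - name: nextcloud-config   # assumed volume backing the config dir
        mountPath: /var/www/html/config
```

The same pattern could instead fetch a bootstrap file from S3 with a small CLI image, per the suggestion above.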
For right now, you still need a persistent volume for the config directory as well, even when using S3. That's been my experience, at least. You can set persistence in the Helm chart, but we probably still need to separate out the config-dir persistence entirely from the data-dir persistence. I'll see if I can find the other issue mentioning this and link it back here.
You don't need to; a ConfigMap works as well. At least last time I checked, the files within the config folder weren't dynamically updated at runtime by the application itself... Alternatively, an init container could just bootstrap the config directory using a script or something...
I haven't tested this in about 6 months, but I thought there was something that changed in the config directory that prevented this from working. I can't remember what it was, though. Oh, maybe it was the … Either way, I haven't had time to test this again in a while, so I'm open to anyone else in the community testing installing the latest version of this Helm chart, enabling S3 as the default storage backend via the …
This is still broken. I've installed the latest version and hit this after a few redeploys of the pod.
@jessebot I stopped using Nextcloud years ago because of this and another S3-backend-related issue. I was just still subscribed to this issue...
I haven't had a chance to test this again because I was waiting for the following to be merged:
In the meantime, @wrenix, have you used S3 as a primary object store and done a restore successfully yet? I plan on testing this again soonish, but not before the above are merged. @provokateurin, @joshtrichards, not sure if either of you use S3 either? 🤔
Maybe the installed version also needs to be persisted? Not just …
@jessebot Sorry, I don't currently use S3 in my setup, and I have no time to build a test setup with S3.
So I think you, @WladyX, and @kquinsland are onto something. In the docker-entrypoint.sh script, we're looking for:

```sh
installed_version="0.0.0.0"
if [ -f /var/www/html/version.php ]; then
    # shellcheck disable=SC2016
    installed_version="$(php -r 'require "/var/www/html/version.php"; echo implode(".", $OC_Version);')"
fi
```

Which, as @WladyX pointed out, later hits this conditional:

```sh
if [ "$installed_version" = "0.0.0.0" ]; then
    echo "New nextcloud instance"
```

I checked on my instance, and the file contains:

```php
<?php
$OC_Version = array(29,0,7,1);
$OC_VersionString = '29.0.7';
$OC_Edition = '';
$OC_Channel = 'stable';
$OC_VersionCanBeUpgradedFrom = array (
  'nextcloud' =>
  array (
    '28.0' => true,
    '29.0' => true,
  ),
  'owncloud' =>
  array (
    '10.13' => true,
  ),
);
$OC_Build = '2024-09-12T12:35:46+00:00 873a4d0e1db10a5ae0e50133c7ef39e00750015b';
$vendor = 'nextcloud';
```

The issue is that I'm not sure how to persist that file without just using our normal PVC setup, since it's not created by nextcloud/helm or nextcloud/docker. I think it's created by nextcloud/server 🤔 Perhaps we can do some sort of check to see if S3 is already enabled? 🤔 Maybe checking if …
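One possible heuristic for that check (a hypothetical sketch, not current entrypoint behavior): the nextcloud/docker image already reads `OBJECTSTORE_S3_BUCKET` to enable S3 as primary storage, so the entrypoint could treat its presence as a hint that this may be an existing S3-backed install even when `version.php` is missing:

```shell
# Hypothetical sketch: treat a configured S3 object store as a hint that
# this pod may belong to an existing installation, even when version.php
# is absent from local storage. OBJECTSTORE_S3_BUCKET is the env var the
# image already uses to enable S3 as primary storage.
s3_enabled() {
  [ -n "${OBJECTSTORE_S3_BUCKET:-}" ]
}
```

How to safely act on that hint (skip the "new instance" path? consult the database instead?) would still need to be decided upstream.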
The file is part of the source code and not generated at runtime. See https://github.com/nextcloud/server/blob/master/version.php
So then the question is: is there a way to accommodate not having to manage PVCs while using S3? 🤔 Could we maybe add some sort of ConfigMap with a simple PHP script like:

```php
<?php
$S3_INSTALLED = true;
```

and then tweak docker-entrypoint.sh upstream in nextcloud/docker to check there? I'm just throwing out suggestions, as I haven't tested anything on a live system yet, but I want to try and help.
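That marker could be delivered as a ConfigMap along these lines (a sketch only; the ConfigMap name, key, and mount wiring are assumptions, and the entrypoint would still need to learn to look for the file):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nextcloud-s3-marker   # hypothetical name
data:
  # Marker file a patched entrypoint could check instead of version.php
  s3-installed.php: |
    <?php
    $S3_INSTALLED = true;
```

It would then be mounted into the config directory via a `volumes` entry referencing the ConfigMap and a matching `volumeMounts` entry on the Nextcloud container.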
Just thinking out loud: I think I saw the version in the DB as well. Maybe docker-entrypoint should check the DB instead of the config for the version, to decide whether Nextcloud was installed or not.
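A sketch of that idea, assuming a MariaDB/MySQL backend and the standard `oc_appconfig` table (Nextcloud stores the installed version there under `appid='core'`, `configkey='installed_version'`; the `mysql` invocation and env var names below are assumptions modeled on the image's existing variables):

```shell
# Hypothetical sketch: read the installed version from the database
# rather than from version.php on local storage.
get_db_version() {
  mysql -N -h "$MYSQL_HOST" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE" \
    -e "SELECT configvalue FROM oc_appconfig WHERE appid='core' AND configkey='installed_version';" \
    2>/dev/null
}

is_new_instance() {
  # Mirror the entrypoint's "0.0.0.0" sentinel: an empty or sentinel
  # value means no prior installation was found.
  v="$1"
  [ -z "$v" ] || [ "$v" = "0.0.0.0" ]
}
```

Usage would be something like `is_new_instance "$(get_db_version)"`; the catch is that the entrypoint currently makes this decision before it necessarily has working database credentials.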
I'm facing this problem. How do I solve it? My values:

```yaml
nextcloud:
  existingSecret:
    enabled: true
    secretName: nextcloud-secret
    usernameKey: nextcloud-username
    passwordKey: nextcloud-password
  objectStore:
    s3:
      enabled: true
      accessKey: "xxxxx"
      secretKey: "xxxxx"
      region: xxxxx
      bucket: "xxxxx"
replicaCount: 1
internalDatabase:
  enabled: false
externalDatabase:
  enabled: true
  existingSecret:
    enabled: true
    secretName: nextcloud-secret
    hostKey: externaldb-host
    databaseKey: externaldb-database
    usernameKey: externaldb-username
    passwordKey: externaldb-password
mariadb:
  enabled: true
  auth:
    rootPassword: test
```
Just some Sunday afternoon thoughts... What problem are we actually trying to solve here? If the aim is to eliminate persistent storage, that's not feasible at this juncture. That's a much larger discussion (one that touches on a re-design of the image and/or Nextcloud Server itself). I guess OP didn't have any persistent storage for … But you definitely need to have …

**Context**

The version check in the entry point is used by the image to determine if there is already a version of Server installed on the container's persistent storage; then:

The key here is that Server doesn't technically run from the image itself. The image installs a version of Server on persistent storage (i.e. the contents of …). This is due to a mixture of how Nextcloud Server functions historically plus how the image currently functions. But the bottom line is:

So … If there are challenges like nextcloud/docker#1006, those need to be tackled directly. The OP in that one may have hit a weird NFS issue or similar. In part that's why I recently did nextcloud/docker#2311 (increasing verbosity of the …).
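For reference, the install-or-upgrade decision described above hinges on a numeric version comparison between what the image ships and what is already on persistent storage. A sketch in the spirit of the entrypoint's `version_greater` helper (reconstructed from memory, so treat it as an approximation rather than the exact upstream code):

```shell
# Approximation of the image's version comparison: succeeds (exit 0)
# when the first version is strictly greater than the second. It sorts
# the two versions numerically field by field; if the smallest is not
# the first argument, the first argument must be greater.
version_greater() {
  [ "$(printf '%s\n' "$@" | sort -t '.' -n -k1,1 -k2,2 -k3,3 -k4,4 | head -n 1)" != "$1" ]
}
```

For example, if the image ships 29.0.7.1 and the storage has 28.0.0.0, `version_greater "29.0.7.1" "28.0.0.0"` succeeds and the upgrade path (rsync of the new Server code onto the persistent volume) runs.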
Nextcloud version (eg, 12.0.2): 16.0.4
Operating system and version (eg, Ubuntu 17.04): Kubernetes / Docker
Apache or nginx version (eg, Apache 2.4.25): Docker Image nextcloud:16.0.4-apache
PHP version (eg, 7.1): Docker Image nextcloud:16.0.4-apache
The issue you are facing:
We launch Nextcloud the first time and it creates the DB correctly, creates the first user correctly, and starts up as expected. We can login, and create / upload files.
To verify our files are secure and retrievable after a major failure, we re-deploy the Nextcloud deployment (scale to 0, scale to 1).
After this, while starting, the logs show the following:
This is the first issue. WHY does it try to re-install? The Database is still there, and so is the previous user. Why does it not just connect and re-use what is there?
After a couple of minutes the container dies and starts again, this time without failure. BUT, when trying to browse to nextcloud, we are greeted with the following message:
If I create the CAN_INSTALL file, I am prompted with the installation/setup screen and am told that the admin account I want to use does already exist.
Is this the first time you've seen this error? (Y/N): Y
The output of your config.php file in `/path/to/nextcloud` (make sure you remove any identifiable information!):

Any idea on how to solve this issue?