Describe the bug
Run nebari deploy on top of a previously created cluster with an install that includes commit ed170cb73f11df42d4d6b6536f7bea92ae1fe934 (which adds a 'count' to the efs module). Nebari destroys the entire efs module, and with it all of the JupyterLab data, and then recreates it.
Expected behavior
New changes would be deployed, but existing data would persist.
OS and architecture in which you are running Nebari
Ubuntu 22.04 amd64
How to Reproduce the problem?
Deploy Nebari into AWS with any previous version up to 2024.7.1. Then install a nebari version that includes this commit/this line: nebari/src/_nebari/stages/infrastructure/template/aws/main.tf, line 67 in ed170cb.
This happens because adding count = var.efs_enabled ? 1 : 0 changes the resource address from module.efs.aws_efs_file_system.main to module.efs[0].aws_efs_file_system.main. The fix is to create a Terraform moved block. Some other changes included since the 2024.7.1 release have already implemented this correctly, e.g. nebari/src/_nebari/stages/kubernetes_services/template/jupyterhub.tf, line 119 in ed170cb.
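Given the address change above, the needed moved block would presumably look something like this (a sketch based on the plan output, not the actual patch):

```hcl
# Sketch: tell Terraform that the object previously tracked at
# module.efs now lives at module.efs[0] (the indexed address introduced
# by the count argument), so state is migrated in place instead of the
# EFS file system being destroyed and recreated.
moved {
  from = module.efs
  to   = module.efs[0]
}
```

Alternatively, an already-affected deployment could rename the state entry by hand with `terraform state mv 'module.efs' 'module.efs[0]'` before deploying again, assuming direct access to the Terraform state.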
The relevant part of the Terraform plan output is:
[terraform]: # module.efs.aws_efs_file_system.main will be destroyed
[terraform]: # (because module.efs is not in configuration)
[terraform]: - resource "aws_efs_file_system" "main" {
[terraform]: - arn = "arn:aws-us-gov:elasticfilesystem:us-gov-west-1:xxxxxxxxxxxx:file-system/fs-xxxxxxxxxxx" -> null
...
...
...
[terraform]: # module.efs[0].aws_efs_file_system.main will be created
[terraform]: + resource "aws_efs_file_system" "main" {
[terraform]: + arn = (known after apply)
[terraform]: + availability_zone_id = (known after apply)
Versions and dependencies used.
Nebari version - previously deployed tag 2024.7.1. Now deployed from nebari-dev/nebari:develop HEAD at 498e569
Compute environment
None
Integrations
No response
Anything else?
No response
So Terraform sees a change to the kubernetes_persistent_volume that forced replacement, but the underlying destroy-PV call is failing because the PV is bound to the PVC created in the same file above. The PVC doesn't explicitly reference the PV by name (just its storage class), so Terraform can't determine that the PVC must be destroyed/replaced along with the PV, if that makes sense.
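The shape being described might look roughly like this (a minimal sketch with hypothetical names and values, not the actual Nebari source):

```hcl
# Hypothetical PV backed by EFS via NFS.
resource "kubernetes_persistent_volume" "main" {
  metadata {
    name = "jupyterhub-shared"  # hypothetical name
  }
  spec {
    storage_class_name = "jupyterhub-shared"
    capacity = {
      storage = "10Gi"
    }
    access_modes = ["ReadWriteMany"]
    persistent_volume_source {
      nfs {
        server = "fs-xxxxxxxx.efs.us-east-1.amazonaws.com"
        path   = "/"
      }
    }
  }
}

resource "kubernetes_persistent_volume_claim" "main" {
  metadata {
    name      = "jupyterhub-shared"
    namespace = "dev"
  }
  spec {
    # The claim binds via the storage class only; it never names the PV,
    # so Terraform sees no dependency from the PVC to the PV and will not
    # replace the claim when the volume is forced to be replaced.
    storage_class_name = "jupyterhub-shared"
    access_modes       = ["ReadWriteMany"]
    resources {
      requests = {
        storage = "10Gi"
      }
    }
  }
}
```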
@kenafoster I believe this issue is the same as #2638 and should be fixed by #2639 which is already merged. Can you try out the latest develop branch and see if it's still an issue?
Update: I think I was mistaken; one more moved block is needed, for AWS only.