Attached disk prevents Instance from being replaced #73

Open
synhershko opened this issue Dec 25, 2018 · 5 comments · Fixed by #2586
Labels: kind/enhancement (Improvements or new features), resolution/fixed (This issue was fixed)

Comments

synhershko commented Dec 25, 2018

The following program launches an instance with an attached disk:

import * as gcp from "@pulumi/gcp";

// runName, dataDiskSize, zone, sshKey, esDataNodeStartupScript, machineImage,
// computeNetwork, isPreemptible, and clusterName are defined elsewhere in the program.
const elasticsearchInstances: gcp.compute.Instance[] = [];

let i = 0;
let d = new gcp.compute.Disk(runName + "-esdata" + i, {
    size: dataDiskSize,
    type: "pd-ssd",
    zone,
});
elasticsearchInstances.push(
    new gcp.compute.Instance(runName + "-elasticsearch-data" + i, {
        machineType: "n1-standard-1",
        zone,
        metadata: {"ssh-keys": sshKey},
        metadataStartupScript: esDataNodeStartupScript,
        bootDisk: {initializeParams: {image: machineImage}},
        attachedDisks: [{source: d}],
        networkInterfaces: [{
            network: computeNetwork.id,
            accessConfigs: [{}],
        }],
        scheduling: {automaticRestart: false, preemptible: isPreemptible},
        serviceAccount: {
            scopes: ["https://www.googleapis.com/auth/cloud-platform", "compute-rw"],
        },
        tags: [clusterName, runName],
    }),
);

Running pulumi up completes fine, but updating certain instance parameters (e.g. the startup script) requires replacing the machine on the next pulumi up. That replacement fails with the following error:

error: Plan apply failed: Error creating instance: googleapi: Error 400: The disk resource 'esdata0-86abfa9' is already being used by 'elasticsearch-data0-5df5543', resourceInUseByAnotherResource

We should be able to replace an instance by detaching the disk from the existing instance and attaching it to the newly launched one.
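One way to model this separation today is to drop the inline attachedDisks property and declare the attachment as its own gcp.compute.AttachedDisk resource, so the disk-to-instance binding is a separate piece of state. A minimal sketch, reusing the d disk and the elasticsearchInstances array from the snippet above; this is an illustrative alternative pattern, not a fix confirmed in this thread:

// Instance defined as above but without attachedDisks; the attachment is a
// standalone resource that can be torn down and recreated on its own.
const attachment = new gcp.compute.AttachedDisk(runName + "-esdata" + i + "-attachment", {
    disk: d.selfLink,
    instance: elasticsearchInstances[i].selfLink,
});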

@leezen added the kind/bug (Some behavior is incorrect or out of spec) label on Feb 25, 2020

cowwoc commented Sep 13, 2020

2 years later, any progress on this? Alternatively, is there a workaround?

leezen (Contributor) commented Sep 24, 2020

In this particular case, does deleteBeforeReplace work as a workaround? Right now, upon replacement (create the new instance, then delete the old one), the new instance attempts to attach the disk, which is still in use by the old one. With the deleteBeforeReplace option, the old instance is deleted first, which should allow the disk to be attached to the replacement.
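For concreteness, a minimal sketch of that workaround applied to the instance from the original snippet (arguments elided; only the resource options change):

// Third argument: Pulumi resource options. deleteBeforeReplace makes Pulumi
// delete the old instance first, freeing the disk before the replacement
// instance tries to attach it.
new gcp.compute.Instance(runName + "-elasticsearch-data" + i, {
    // ...same arguments as in the original snippet...
}, { deleteBeforeReplace: true });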

@mikhailshilkov added the resolution/by-design (This issue won't be fixed because the functionality is working as designed) and kind/enhancement (Improvements or new features) labels and removed the kind/bug (Some behavior is incorrect or out of spec) and resolution/by-design labels on Sep 27, 2023

wpietri commented Apr 5, 2024

It's wild to me that a simple use case like this has not worked for more than 5 years. I'm just trying out Pulumi and it's making me question whether it's the right choice for us.

zbuchheit commented

deleteBeforeReplace was able to resolve this for me, but interestingly enough this behavior didn't seem to happen when using YAML; I only saw it while using TypeScript. I haven't tested any languages besides TypeScript and YAML.

pulumi-bot (Contributor) commented

This issue has been addressed in PR #2586 and shipped in release v8.8.0.
