
Scaleway: Adding multiple volumes doesn't work #9417

Closed
tboerger opened this issue Oct 17, 2016 · 8 comments · Fixed by #9493
@tboerger (Contributor)

Terraform Version

0.7.6

Affected Resource(s)

  • scaleway_server
  • scaleway_volume
  • scaleway_volume_attachment

Terraform Configuration Files

variable "access" {
  type = "string"
}

variable "org" {
  type = "string"
}

provider "scaleway" {
  access_key   = "${var.access}"
  organization = "${var.org}"
}

resource "scaleway_server" "store" {
  name  = "store"
  image = "75c28f52-6c64-40fc-bb31-f53ca9d02de9"
  type  = "C2S"
}

resource "scaleway_volume" "store" {
  count      = "3"
  name       = "vol-${count.index}"
  size_in_gb = "150"
  type       = "l_ssd"
}

resource "scaleway_volume_attachment" "store" {
  count  = "3"
  volume = "${element(scaleway_volume.store.*.id, count.index)}"
  server = "${element(scaleway_server.store.*.id, 0)}"
}

Debug Output

https://gist.github.com/tboerger/c1e36a1ab2d53cfc072c6a9919bcbc66

Panic Output

n/a

Expected Behavior

I would expect a running server with 3 additional volumes attached.

Actual Behavior

The apply fails because one volume attachment triggers a reboot of the server, and the next volume attachment then fails because the server is no longer running:

2 error(s) occurred:

* scaleway_volume_attachment.store.2: StatusCode: 400, Type: invalid_request_error, APIMessage: server is being stopped or rebooted
* scaleway_volume_attachment.store.0: StatusCode: 400, Type: invalid_request_error, APIMessage: server is being stopped or rebooted

After the failed apply you can see in the Scaleway web UI that only one volume has been attached to the server:

[Screenshot of the Scaleway web UI (2016-10-17) showing only one volume attached to the server.]

Steps to Reproduce

  1. Get an `access_key` and the `organization` from your Scaleway account
  2. Execute `terraform apply` and wait

Important Factoids

I originally tried to launch multiple nodes with multiple volumes at once, but I have slimmed the example down to keep it simple. Maybe it would already help to assign volumes during node creation; the API supports that.

References

Maybe these issues are somehow related, or maybe an existing PR solves the problem partially:

/cc @nicolai86

@nicolai86 (Contributor)

Hey @tboerger

Thanks for the bug report!

I think the server state handling inside scaleway_volume_attachment is the problem: Scaleway only allows attaching volumes to a stopped server, so the resource needs to stop the server, attach the volume, and start it up again.

As a test for this hypothesis, you could add state = "stopped" to the scaleway_server resource. If it works with a stopped server, we'll have to make the state handling more robust.

Can you verify that?

@tboerger (Contributor, Author)

I thought that could really help, but it results in a server error: https://gist.github.com/tboerger/ec91591d98335f69187ba89cbdb49eb2

--- test.tf.orig    2016-10-18 07:51:34.000000000 +0200
+++ test.tf 2016-10-18 07:43:51.000000000 +0200
@@ -17,6 +17,7 @@
   name  = "store"
   image = "75c28f52-6c64-40fc-bb31-f53ca9d02de9"
   type  = "C2S"
+  state = "stopped"
 }

 resource "scaleway_volume" "store" {

@tboerger (Contributor, Author)

Maybe a solution would be to attach volumes directly to the server; the API accepts that (at least at creation time). That could also solve #9254 and seems to have been added in #8678, but there is still the issue of adding volumes to servers that have already been created.

@nicolai86 (Contributor)

I think supporting inline volumes would work, and it's probably required to work around the VC1M issue you referenced.

The multiple volume attachment should be fixed in any case. Luckily, adding some test cases should reproduce the breakage reliably, and then we can fix it.
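
For illustration, here is a rough sketch of what such an acceptance test might look like, using Terraform's helper/resource test framework. The testAccPreCheck and testAccProviders helpers are assumed to come from the provider's existing test suite, and the test name and config are hypothetical; the point is simply to create more than one scaleway_volume_attachment against the same server so the concurrent stop/attach/start cycles collide.

package scaleway

import (
    "testing"

    "github.com/hashicorp/terraform/helper/resource"
)

// Hypothetical acceptance test: attach two counted volumes to the same
// server so that the two attachments race each other, which is exactly
// the failure reported in this issue.
func TestAccScalewayVolumeAttachment_multiple(t *testing.T) {
    resource.Test(t, resource.TestCase{
        PreCheck:  func() { testAccPreCheck(t) }, // assumed helper from the provider's test suite
        Providers: testAccProviders,              // assumed helper from the provider's test suite
        Steps: []resource.TestStep{
            {
                Config: testAccCheckScalewayVolumeAttachmentConfig_multiple,
                Check: resource.ComposeTestCheckFunc(
                    resource.TestCheckResourceAttr("scaleway_volume.test.0", "size_in_gb", "150"),
                    resource.TestCheckResourceAttr("scaleway_volume.test.1", "size_in_gb", "150"),
                ),
            },
        },
    })
}

const testAccCheckScalewayVolumeAttachmentConfig_multiple = `
resource "scaleway_server" "base" {
  name  = "test"
  image = "75c28f52-6c64-40fc-bb31-f53ca9d02de9"
  type  = "C2S"
}

resource "scaleway_volume" "test" {
  count      = 2
  name       = "test-${count.index}"
  size_in_gb = 150
  type       = "l_ssd"
}

resource "scaleway_volume_attachment" "test" {
  count  = 2
  volume = "${element(scaleway_volume.test.*.id, count.index)}"
  server = "${scaleway_server.base.id}"
}
`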

@tboerger (Contributor, Author)

I would really like to provide a fix, but I don't have enough internal knowledge of Terraform to work out how to solve this volume attachment issue.

@nicolai86 (Contributor)

@tboerger it's not a nice solution, but if you're interested in the details, check out #9493.

@tboerger (Contributor, Author)

This is getting solved via #9493.

stack72 pushed a commit that referenced this issue Oct 27, 2016
* provider/scaleway: fix scaleway_volume_attachment with count > 1

Since Scaleway requires servers to be powered off before volumes can be attached, we need to make sure that we don't power down a server twice, or power up a server while it's supposed to be modified.

Sadly, Terraform doesn't seem to provide serialization primitives for use cases like this, but putting the code in question behind a `sync.Mutex` does the trick, too.

fixes #9417

* provider/scaleway: use mutexkv to lock per-resource

following @dcharbonnier's suggestion. Thanks!

* provider/scaleway: cleanup waitForServerState signature

* provider/scaleway: store serverID in var

* provider/scaleway: correct imports

* provider/scaleway: increase timeouts
antonbabenko pushed a commit to antonbabenko/terraform that referenced this issue Oct 27, 2016
mathieuherbert pushed a commit to mathieuherbert/terraform that referenced this issue Oct 30, 2016
gusmat pushed a commit to gusmat/terraform that referenced this issue Dec 6, 2016
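
For reference, here is a minimal, self-contained sketch of the per-server locking idea described in the commit message above. This is not the provider's actual code: the small MutexKV type only mimics what Terraform's helper/mutexkv package provides (one mutex per key, keyed here by server ID), and stopServer, addVolumeToServer and startServer are placeholder stubs for the real Scaleway API calls.

package scaleway

import "sync"

// MutexKV hands out one mutex per key, mimicking Terraform's
// helper/mutexkv package; the key used below is the server ID.
type MutexKV struct {
    lock  sync.Mutex
    store map[string]*sync.Mutex
}

func NewMutexKV() *MutexKV {
    return &MutexKV{store: map[string]*sync.Mutex{}}
}

func (m *MutexKV) get(key string) *sync.Mutex {
    m.lock.Lock()
    defer m.lock.Unlock()
    mu, ok := m.store[key]
    if !ok {
        mu = &sync.Mutex{}
        m.store[key] = mu
    }
    return mu
}

// Lock blocks until the mutex for the given key is available.
func (m *MutexKV) Lock(key string) { m.get(key).Lock() }

// Unlock releases the mutex for the given key.
func (m *MutexKV) Unlock(key string) { m.get(key).Unlock() }

var serverLock = NewMutexKV()

// Placeholder stubs standing in for the real Scaleway API calls.
func stopServer(serverID string) error                  { return nil }
func startServer(serverID string) error                 { return nil }
func addVolumeToServer(serverID, volumeID string) error { return nil }

// attachVolume sketches the create step of scaleway_volume_attachment:
// the per-server lock ensures that only one attachment at a time runs
// the stop / attach / start cycle, so with count > 1 an attachment no
// longer tries to modify a server that another attachment just powered
// off or is about to power back on.
func attachVolume(serverID, volumeID string) error {
    serverLock.Lock(serverID)
    defer serverLock.Unlock(serverID)

    if err := stopServer(serverID); err != nil {
        return err
    }
    if err := addVolumeToServer(serverID, volumeID); err != nil {
        return err
    }
    return startServer(serverID)
}

Locking per server ID rather than behind a single global sync.Mutex means attachments to different servers can still run in parallel; only attachments that target the same server are serialized.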
@ghost commented Apr 21, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 21, 2020