Scaleway: Adding multiple volumes doesn't work #9417
Hey @tboerger, thanks for the bug report! I think the issue is the server state handling inside the provider. As a test for this hypothesis you could add an explicit server state, e.g. `state = "stopped"`. Can you verify that?
I thought that could really help, but it results in a server error: https://gist.github.com/tboerger/ec91591d98335f69187ba89cbdb49eb2

```diff
--- test.tf.orig	2016-10-18 07:51:34.000000000 +0200
+++ test.tf	2016-10-18 07:43:51.000000000 +0200
@@ -17,6 +17,7 @@
   name = "store"
   image = "75c28f52-6c64-40fc-bb31-f53ca9d02de9"
   type = "C2S"
+  state = "stopped"
 }
 
 resource "scaleway_volume" "store" {
```
I think supporting volumes inline would work, and to work around the VC1M issue you referenced it's probably required. The multiple volume attachment should be fixed in any case. Luckily, adding some test cases should break this reliably, and then we can fix it.
I would really like to provide a fix, but I don't have enough internal knowledge of Terraform to figure out how to solve this issue with attaching volumes.
Getting solved via #9493.
* provider/scaleway: fix scaleway_volume_attachment with count > 1

  Since Scaleway requires servers to be powered off to attach volumes to, we need to make sure that we don't power down a server twice, or power up a server while it's supposed to be modified. Sadly, Terraform doesn't seem to sport serialization primitives for use cases like this, but putting the code in question behind a `sync.Mutex` does the trick, too.

  Fixes #9417

* provider/scaleway: use mutexkv to lock per-resource, following @dcharbonnier's suggestion. Thanks!
* provider/scaleway: cleanup waitForServerState signature
* provider/scaleway: store serverID in var
* provider/scaleway: correct imports
* provider/scaleway: increase timeouts
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Terraform Version
0.7.6
Affected Resource(s)
scaleway_server
scaleway_volume
scaleway_volume_attachment
Terraform Configuration Files
Debug Output
https://gist.github.com/tboerger/c1e36a1ab2d53cfc072c6a9919bcbc66
Panic Output
n/a
Expected Behavior
I would expect to end up with a running server that has 3 additional volumes attached.
Actual Behavior
The setup simply fails: one volume attachment triggers a reboot of the server, and the next volume attachment then fails because the server is not running anymore.
After the failed action you can see in the Scaleway web UI that only one volume has been attached to the server:

Steps to Reproduce
1. Get an `access_key` and the `organization` from your Scaleway account.
2. Simply execute `terraform apply` and wait.
Important Factoids
I tried to launch multiple nodes with multiple volumes at once, but I have slimmed the example down to keep it simple. Maybe it would already help to assign volumes during node creation; the API supports that (a hypothetical sketch of what that could look like follows).
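A hypothetical sketch of inline volume assignment at server creation is shown below. The provider did not support this when the issue was filed, so the `volume` block is illustrative syntax only, not a documented feature of that release; sizes and types are assumptions.

```hcl
# Hypothetical, illustrative syntax; not supported by the provider at the
# time of this issue. Volume sizes and types are assumptions.
resource "scaleway_server" "store" {
  name  = "store"
  image = "75c28f52-6c64-40fc-bb31-f53ca9d02de9"
  type  = "C2S"

  # Volumes declared as part of the server, so the Scaleway API could
  # attach them at creation time instead of rebooting the server later.
  volume {
    size_in_gb = 50
    type       = "l_ssd"
  }

  volume {
    size_in_gb = 50
    type       = "l_ssd"
  }

  volume {
    size_in_gb = 50
    type       = "l_ssd"
  }
}
```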
References
Maybe these issues are somehow related, or maybe an existing PR partially solves the issue:
/cc @nicolai86