[VM Group] vms changed outside of Terraform #160
Closed · treywelsh opened this issue Sep 17, 2021 · 2 comments · Fixed by #182

treywelsh (Collaborator) commented Sep 17, 2021

A comment on another issue pointed out a problem with the vms attribute of the virtual machine group resource.

Terraform version: 0.15.4 and higher.
Starting with these versions, the output of the plan/apply commands has been enriched (see the NEW FEATURES section of the changelog), which explains why recent versions of Terraform show diffs that we did not see before.

After the first apply, the next plan/apply will show a diff:

Note: Objects have changed outside of Terraform

Terraform detected the following changes made outside of Terraform since the
last "terraform apply":

  # module.test.module.vmgroup.opennebula_virtual_machine_group.vm-group-haproxy has been changed
  ~ resource "opennebula_virtual_machine_group" "vm-group-haproxy" {
        id          = "115"
        name        = "haproxy"
        tags        = {
            "environment" = "dev"
        }
        # (6 unchanged attributes hidden)

      ~ role {
            id                = 0
            name              = "haproxy"
          ~ vms               = [
              + 1030,
              + 1032,
            ]
            # (3 unchanged attributes hidden)
        }
    }

treywelsh (Collaborator, Author) commented Sep 20, 2021

I reproduced it with v0.3.0 of the provider.

It seems that Terraform is right. Here is my current understanding of the problem:

resource "opennebula_virtual_machine_group" "test" {
  name        = "test-vmgroup"
  group       = "oneadmin"
  permissions = "642"
  role {
    name   = "anti-aff"
    policy = "ANTI_AFFINED"
  }
  role {
    name         = "host-aff"
    host_affined = [0]
  }
  tags = {
    env      = "prod"
    customer = "test"
  }
}

resource "opennebula_virtual_machine" "test" {
  name        = "test-virtual_machine"
  group       = "oneadmin"
  permissions = "642"
  memory      = 128
  cpu         = 0.1

  context = {
    NETWORK      = "YES"
    SET_HOSTNAME = "$NAME"
    TEST         = "TEST1"
  }

  graphics {
    type   = "VNC"
    listen = "0.0.0.0"
    keymap = "en-us"
  }

  vmgroup {
    vmgroup_id = opennebula_virtual_machine_group.test.id
    role       = "host-aff"
  }

  os {
    arch = "x86_64"
    boot = ""
  }

  tags = {
    env      = "prod"
    customer = "test"
  }

  timeout = 5
}

The vmgroup is created first (a dependency introduced by vmgroup_id = opennebula_virtual_machine_group.test.id), then read via the provider (with vms still empty); the VM is then created and added to the VM group.
At this point the vms part of the vmgroup is no longer empty on the cloud provider side, but Terraform is not aware of its content.

On the next plan, Terraform reads the resource again from the cloud provider and sees the content of vms in the VM group, hence the drift reported above.
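To make that sequence concrete, here is a minimal sketch of the refresh path, assuming terraform-plugin-sdk v2 and a hypothetical readVMGroupRoles helper standing in for the OpenNebula API call; this is not the provider's actual code, and all names are illustrative. Whatever VM IDs OpenNebula has added to the role since the last apply get copied into state, which is exactly the change that Terraform 0.15.4+ reports as made outside of Terraform.

package opennebula

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// vmGroupRole mirrors the role data returned by OpenNebula (illustrative only).
type vmGroupRole struct {
	ID   int
	Name string
	VMs  []int
}

// readVMGroupRoles is a hypothetical placeholder for the call that fetches the
// VM group from OpenNebula; the real provider talks to the OpenNebula API here.
func readVMGroupRoles(meta interface{}, id string) ([]vmGroupRole, error) {
	// ... query OpenNebula for the VM group identified by id ...
	return nil, nil
}

// resourceOpennebulaVMGroupRead sketches the refresh path: any VM IDs that
// OpenNebula added to a role since the last apply are written into state,
// which Terraform then flags as a change made outside of Terraform.
func resourceOpennebulaVMGroupRead(d *schema.ResourceData, meta interface{}) error {
	roles, err := readVMGroupRoles(meta, d.Id())
	if err != nil {
		return err
	}

	flattened := make([]map[string]interface{}, 0, len(roles))
	for _, r := range roles {
		flattened = append(flattened, map[string]interface{}{
			"id":   r.ID,
			"name": r.Name,
			"vms":  r.VMs, // filled by OpenNebula when a VM joins the group
		})
	}

	// Writing the server-side VM list into state is what surfaces the drift
	// on the next plan.
	return d.Set("role", flattened)
}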

The first questions that come to mind are:

  • How could we manage inter-resource dependencies?
    I don't know of any tool in the provider SDK to manage this.
  • Could we map the schemas to the provider data in another way?
    That's one possible solution: for instance, we could remove the vms section from the vmgroup resource and try to expose this information only per VM.
    The vms field is computed only, which means the provider only reads this information to return it to the user; it is neither optional nor required (see the schema sketch below).
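For readers less familiar with the SDK, here is a minimal sketch of what a computed-only attribute looks like, again assuming terraform-plugin-sdk v2; the function and attribute names are illustrative and do not come from the provider's actual schema. Because only Computed is set on vms, a user cannot declare the value in a configuration: the provider alone fills it in during Read.

package opennebula

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// vmGroupRoleSchema sketches a role block whose "vms" attribute is computed
// only. Illustrative names, not the provider's real schema.
func vmGroupRoleSchema() *schema.Schema {
	return &schema.Schema{
		Type:     schema.TypeList,
		Required: true,
		Elem: &schema.Resource{
			Schema: map[string]*schema.Schema{
				"name": {
					Type:     schema.TypeString,
					Required: true,
				},
				// Computed only: the provider fills this from OpenNebula during
				// Read and returns it to the user; it is neither Optional nor
				// Required, so it cannot express "this VM belongs to the group"
				// from the configuration side.
				"vms": {
					Type:     schema.TypeList,
					Computed: true,
					Elem:     &schema.Schema{Type: schema.TypeInt},
				},
			},
		},
	}
}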

I'm still looking for other solutions

Th0masL (Contributor) commented Sep 22, 2021

Interesting, nice finding!

And sorry for making you believe that this behavior was not happening with version 0.3.0 of the provider. I guess I never really saw it on v0.3.0 because I was always destroying and recreating VMs (due to the live Context Update not working), whereas I'm now updating running VMs with the new version of the provider, which I built from the PR that adds support for Context Update.

treywelsh linked a pull request on Nov 10, 2021 that will close this issue, and added commits referencing this issue on Nov 10 and Nov 16, 2021.