
consul_key_subtree resource #5211

Closed
apparentlymart opened this issue Feb 19, 2016 · 2 comments

@apparentlymart (Contributor)

The consul_keys resource is useful for managing isolated keys within the key/value store, but it's not able to deal with situations where an entire group of keys within a particular subtree of the store must be managed.

In particular, it is not possible to direct Terraform to take ownership of a particular key prefix and ensure not only that specific keys are present, but also that no keys exist under that prefix beyond those specified in the Terraform configuration. (This is an offshoot of #5210.)

This is a proposal for a new resource consul_key_subtree that aims to address this use-case. The expectation is that a particular sub-tree of the store "belongs" to a given system, and that system's Terraform config completely manages that sub-tree, with no other systems allowed to write to it.

resource "consul_key_subtree" "example" {
    // These have the same meaning as for consul_keys
    datacenter = "..."
    token = "..."

    // Define the common prefix for the keyspace managed by this resource
    path_prefix = "config/example"

    sub_keys = {
        "database/endpoint" = "${aws_db_instance.main.endpoint}"
        "database/username" = "${aws_db_instance.main.username}"
        "tls_cert_id" = "${aws_iam_server_certificate.main.id}"
    }
}

The above configuration would ensure that the following keys exist in the key/value store with the given values:

  • config/example/database/endpoint
  • config/example/database/username
  • config/example/tls_cert_id

It would also ensure that no other keys exist in the store under the config/example/ prefix, producing deletion diffs for any unmanaged keys that are detected.

Using a mapping for the sub-keys here, as opposed to the set of resources used by consul_keys, means that planning should produce more intuitive diffs that show changes directly as changes, rather than as remove/add pairs:

  sub_keys.#:                                "3" => "3"
  sub_keys.config/example/database/endpoint: "examplesomething.amazonrds.com" => "examplesomethingelse.amazonrds.com"
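Similarly, an unmanaged key detected under the prefix (say config/example/stray_key, a hypothetical name) would show up as a removal from the map. The rendering below is only a sketch of what such a diff might look like:

  sub_keys.#:                        "4" => "3"
  sub_keys.config/example/stray_key: "unmanaged value" => ""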

A notable drawback of the above design is that it still supports only scalar values for each key, and not any sort of computed list or map.

It's pretty common to make a folder in Consul containing a list (or set) of items that are then iterated over using consul-template. For example:

  • config/example/backend_servers/0 = 10.1.2.1:8200
  • config/example/backend_servers/1 = 10.1.2.5:9000
  • config/example/backend_servers/2 = 10.1.2.15:8200

(Doing this with server addresses might be a bit of a stretch since you could use the service registry for this, but this was just the first simple example that came quickly to my mind. A more complex example we have at work is a list of complex configuration structures representing monitoring checks for a tool like Nagios or Sensu.)

The above could be achieved by specifically enumerating each item, but it would be more convenient if there were a way to use "globbing" to automatically produce a list of keys from Terraform resources. This isn't possible as proposed above because it's not currently possible to "glob" into a map.
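For illustration, explicit enumeration with the proposed resource might look something like the sketch below. The aws_instance references and the port number are hypothetical stand-ins for whatever actually produces the addresses:

resource "consul_key_subtree" "example" {
    path_prefix = "config/example"

    sub_keys = {
        // ...the database/tls keys from the earlier example...

        // One entry per backend server, indexed by hand
        "backend_servers/0" = "${aws_instance.backend.0.private_ip}:8200"
        "backend_servers/1" = "${aws_instance.backend.1.private_ip}:8200"
        "backend_servers/2" = "${aws_instance.backend.2.private_ip}:8200"
    }
}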

My first inclination is to say that some sort of comprehension of lists into maps is something Terraform should support in the future, and not to attempt to address it within the design of this resource, but I'm open to other suggestions!

@apparentlymart (Contributor, Author)

Closing this in favor of #5988.

@ghost commented Apr 26, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators Apr 26, 2020