
Expose additional properties during cluster creation #384

Closed

Conversation

@brianhealeyRMN (Author):

Allow for sending IssueClientCertificate as part of the cluster creation request
Allow for sending the Management object as part of the cluster creation request
Allow for sending the AutoScale object as part of the cluster creation request

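For reference, the three objects above map to the following types in the google.golang.org/api/container/v1 Go client. This is an abridged sketch showing only the fields discussed in this PR:

// Abridged from google.golang.org/api/container/v1; only the fields
// relevant to this PR are shown.
type ClientCertificateConfig struct {
	// Whether to issue a client certificate during cluster creation.
	IssueClientCertificate bool
}

type NodeManagement struct {
	AutoRepair  bool
	AutoUpgrade bool
}

type NodePoolAutoscaling struct {
	Enabled      bool
	MinNodeCount int64
	MaxNodeCount int64
}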
@brianhealeyRMN brianhealeyRMN changed the title Expose additional cluster properties Expose additional properties during cluster creation Sep 3, 2017
@rosbo rosbo requested review from danawillow and rosbo September 7, 2017 20:20
@paddycarver paddycarver requested review from paddycarver and removed request for rosbo September 7, 2017 20:20
@danawillow (Contributor) left a comment:

Thanks so much @brianhealeyRMN for the PR! I'm excited to see these improvements to our GKE handling in Terraform.

This PR had a few test failures; mind fixing them?

$ make testacc TEST=./google TESTARGS='-run=TestAccContainerCluster_withNodePoolManagement'
==> Checking that code complies with gofmt requirements...
TF_ACC=1 go test ./google -v -run=TestAccContainerCluster_withNodePoolManagement -timeout 120m
=== RUN   TestAccContainerCluster_withNodePoolManagement
--- FAIL: TestAccContainerCluster_withNodePoolManagement (2.65s)
	testing.go:435: Step 0 error: Error applying: 1 error(s) occurred:

		* google_container_cluster.with_node_pool_node_management: 1 error(s) occurred:

		* google_container_cluster.with_node_pool_node_management: googleapi: Error 400: Cannot have autoUpgrade or autoRepair set while image is CONTAINER_VM., badRequest
FAIL

$ make testacc TEST=./google TESTARGS='-run=TestAccContainerCluster_withNodePoolAutoScaling'
==> Checking that code complies with gofmt requirements...
TF_ACC=1 go test ./google -v -run=TestAccContainerCluster_withNodePoolAutoScaling -timeout 120m
=== RUN   TestAccContainerCluster_withNodePoolAutoScaling
--- FAIL: TestAccContainerCluster_withNodePoolAutoScaling (0.02s)
	testing.go:435: Step 0 error: Configuration is invalid.

		Warnings: []string(nil)

		Errors: []string{"google_container_cluster.with_node_pool_node_auto_scaling: node_pool.0.node_config.0: invalid or unknown key: auto_scaling"}
FAIL

"issue_client_certificate": {
Type: schema.TypeBool,
Optional: true,
DiffSuppressFunc: linkDiffSuppress,
@danawillow (Contributor):

Since this is a boolean, it shouldn't need this particular DiffSuppressFunc

@brianhealeyRMN (Author):

Removed

@@ -66,6 +66,11 @@ func resourceContainerCluster() *schema.Resource {
	Required: true,
	ForceNew: true,
},
"issue_client_certificate": {
@danawillow (Contributor):

Looks like the other elements here are roughly alphabetical; mind maintaining that?

@brianhealeyRMN (Author) commented Sep 16, 2017:

Updated the order for the schema to be alphabetical

@@ -266,6 +271,60 @@ func resourceContainerCluster() *schema.Resource {
},

"node_config": schemaNodeConfig,

"management": {
@danawillow (Contributor):

likewise (alphabetical)

@brianhealeyRMN (Author):

done

Elem: &schema.Resource{
	Schema: map[string]*schema.Schema{

		"enabled": {
@danawillow (Contributor):

Consider removing this boolean and using the presence of the autoscaling object instead (this is what was done in #157)
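A sketch of that pattern in the create path, assuming a nodePoolConfig map read from the schema (the name nodePoolConfig and the surrounding code are illustrative, not taken from this diff):

// The presence of the nested "autoscaling" block is the enable
// signal; no separate "enabled" boolean is needed.
if v, ok := nodePoolConfig["autoscaling"]; ok && len(v.([]interface{})) > 0 {
	autoscaling := v.([]interface{})[0].(map[string]interface{})
	nodePool.Autoscaling = &container.NodePoolAutoscaling{
		Enabled:      true,
		MinNodeCount: int64(autoscaling["min_node_count"].(int)),
		MaxNodeCount: int64(autoscaling["max_node_count"].(int)),
	}
}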

@brianhealeyRMN (Author):

Modified as suggested

},
},

"auto_scaling": {
@danawillow (Contributor):

Can this be autoscaling to match what's in google_container_node_pool and the API docs (which has it as one word)?

@brianhealeyRMN (Author):

Modified as suggested

"client_certificate": cluster.MasterAuth.ClientCertificate,
"client_key": cluster.MasterAuth.ClientKey,
"cluster_ca_certificate": cluster.MasterAuth.ClusterCaCertificate,
"issue_client_certificate": true,
@danawillow (Contributor):

Instead of setting it to true here and potentially overriding it later, how about just set it later?

@brianhealeyRMN (Author) commented Sep 16, 2017:

Not a problem. The default behavior is true. In the API this is a flag rather than an actual setting, so I added code that makes it behave as a setting by checking for the existence of the client certificate in the response.
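In the read path that could look roughly like the sketch below, where cluster is the container.Cluster returned by the API (the map literal mirrors the diff shown earlier):

masterAuth := map[string]interface{}{
	"username":               cluster.MasterAuth.Username,
	"password":               cluster.MasterAuth.Password,
	"client_certificate":     cluster.MasterAuth.ClientCertificate,
	"client_key":             cluster.MasterAuth.ClientKey,
	"cluster_ca_certificate": cluster.MasterAuth.ClusterCaCertificate,
	// Derived from the response rather than defaulted: a certificate
	// was issued if and only if one came back from the API.
	"issue_client_certificate": cluster.MasterAuth.ClientCertificate != "",
}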

@@ -635,6 +687,7 @@ resource "google_container_cluster" "with_master_auth" {
master_auth {
  username                 = "mr.yoda"
  password                 = "adoy.rm"
  issue_client_certificate = false
@danawillow (Contributor):

Does this change anything from what was there before? I think if you want to test issue_client_certificate, it probably makes more sense to set it to true.

@brianhealeyRMN (Author):

True is the default behavior, so setting it to false does change things. The larger concern, though, is that neither case was being tested, so I added tests that specifically cover both true and false.
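A second fixture for the non-default case might look like this sketch. The resource body mirrors the existing with_master_auth fixture; the zone and node count are assumptions, and acctest is the terraform helper/acctest package:

// Hypothetical fixture exercising issue_client_certificate = false.
var testAccContainerCluster_masterAuthNoCert = fmt.Sprintf(`
resource "google_container_cluster" "with_master_auth_no_cert" {
  name               = "cluster-test-%s"
  zone               = "us-central1-a"
  initial_node_count = 3

  master_auth {
    username                 = "mr.yoda"
    password                 = "adoy.rm"
    issue_client_certificate = false
  }
}`, acctest.RandString(10))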

@@ -66,6 +66,11 @@ func resourceContainerCluster() *schema.Resource {
	Required: true,
	ForceNew: true,
},
"issue_client_certificate": {
	Type:     schema.TypeBool,
	Optional: true,
@danawillow (Contributor):

I think this needs to be ForceNew as well.
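With the default and ForceNew in place, the field would end up as in this sketch (Default: true follows the thread's statement that issuing a certificate is the default behavior):

"issue_client_certificate": {
	Type:     schema.TypeBool,
	Optional: true,
	Default:  true,
	ForceNew: true, // changing this requires recreating the cluster
},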

@brianhealeyRMN (Author):

Done

Ordered the schema alphabetically.
Modified the behavior of issue_client_certificate so it no longer defaults and then overrides the value.
Updated integration tests to include tests with and without issue_client_certificate.
Fixed the integration tests for Management and AutoScale.
Updated the autoscaling tag to match the Google schema convention.
@brianhealeyRMN (Author) left a comment:

Please see the updated PR based upon your feedback

@brianhealeyRMN (Author):

Thanks @danawillow
I updated the PR with your suggestions and fixed the integration tests. I also added tests to provide better coverage.

@@ -304,6 +360,12 @@ func resourceContainerClusterCreate(d *schema.ResourceData, meta interface{}) error {
	Password: masterAuth["password"].(string),
	Username: masterAuth["username"].(string),
}

if v, ok := masterAuth["issue_client_certificate"]; ok {
@danawillow (Contributor):

nit: since you set a default, you can remove this if statement
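That is, with a schema Default the key is always present in the flattened map, so the lookup can be unconditional. A sketch, where clusterMasterAuth is an illustrative name for the container.MasterAuth being built:

clusterMasterAuth := &container.MasterAuth{
	Username: masterAuth["username"].(string),
	Password: masterAuth["password"].(string),
	// No ok-check needed: the schema Default guarantees the key exists.
	ClientCertificateConfig: &container.ClientCertificateConfig{
		IssueClientCertificate: masterAuth["issue_client_certificate"].(bool),
	},
}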


// Autoscaling is disabled by default; set Enabled to true because the
// cluster config defines the block, which signals intent to enable it.
nodePool.Autoscaling.Enabled = true
@danawillow (Contributor):

nit: you can put this in the block on line 480

// it
nodePool.Autoscaling.Enabled = true

var minNodeCount int
@danawillow (Contributor):

I don't think you need this var
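Inlined, the conversion can read straight from the config map (autoscaling here is the assumed name of the flattened block in the surrounding code):

nodePool.Autoscaling.MinNodeCount = int64(autoscaling["min_node_count"].(int))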


}
}

if np.Autoscaling != nil {
@danawillow (Contributor):

you probably also want to check that np.Autoscaling.Enabled
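A sketch of that flatten-side check (nodePoolMap is an illustrative name for the per-pool state map being built):

// A disabled autoscaler returned by the API should read as an
// absent block in state, so gate on Enabled as well as nil.
if np.Autoscaling != nil && np.Autoscaling.Enabled {
	nodePoolMap["autoscaling"] = []map[string]interface{}{
		{
			"min_node_count": np.Autoscaling.MinNodeCount,
			"max_node_count": np.Autoscaling.MaxNodeCount,
		},
	}
}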

@davidquarles (Contributor):

Yeah, this is awesome @brianhealeyRMN! @danawillow and I were discussing version upgrades in #633, which led me here. Would it be possible to extend the nodePool.Management work to target the node pools directly, rather than (or in addition to?) the parent cluster resource, and to handle the update lifecycle as well? I can totally help, if you'd like to collaborate. I can also open a new issue and start from scratch, if this is entirely out of scope here.

@justinsb commented Feb 7, 2018:

I would love to see the issue_client_certificate support that was implemented in this PR land. @brianhealeyRMN, do you think you might rebase or split this PR up? Or would you mind if I (or someone else, if anyone is keen) added that support?

@danawillow (Contributor):

I'm not @brianhealeyRMN, but I think you should go for it @justinsb :)

@paddycarver (Contributor):

This PR has been inactive for a long time, with no response, so I'm going to close it out. If the author wants to carry on with it, feel free to post in this thread, and we can re-open it. If someone else is interested in picking this up, feel free to open a new PR for it.

@ghost commented Nov 17, 2018:

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators Nov 17, 2018
@ghost ghost removed the waiting-response label Nov 17, 2018