
Network Load Balancer can't configure HTTPS health check protocol #3054

Closed
subratkprusty opened this issue Jan 18, 2018 · 6 comments
Labels
bug Addresses a defect in current functionality. service/elbv2 Issues and PRs that pertain to the elbv2 service. stale Old or inactive issues managed by automation, if no further action taken these will get closed.

Comments

subratkprusty commented Jan 18, 2018

While creating an AWS NLB target group, the health check cannot be configured with the HTTPS protocol. The same configuration works in the AWS console.

```hcl
resource "aws_lb_target_group" "main" {
  name        = "${format("%s-%s-%s-nlb-target-group", var.co_name, var.app_name, var.env)}"
  port        = 443
  protocol    = "TCP"
  target_type = "ip"
  vpc_id      = "${var.vpc_id}"

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 10
    port                = "443"
    protocol            = "HTTPS"
    path                = "/index.html"
    interval            = 30
    matcher             = "200"
  }
}
```

Output:

Error: Error applying plan:

1 error(s) occurred:

  • module.nw_lb.aws_lb_target_group.main: 1 error(s) occurred:

  • aws_lb_target_group.main: Error modifying Target Group: InvalidConfigurationRequest: You cannot change the health check protocol for a target group with the TCP protocol
    status code: 400, request id: 4571ee57-fc5a-11e7-96d7-e56a883382e9
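The error comes from the ELBv2 API itself: for a TCP target group it rejects `ModifyTargetGroup` calls that change the health check protocol, so the protocol effectively has to be set when the target group is created. A sketch of one workaround (an assumption on my part, not something from this thread): force Terraform to replace the target group rather than modify it in place.

```hcl
resource "aws_lb_target_group" "main" {
  # ... same arguments as in the configuration above ...

  # Creating a replacement before destroying the old target group avoids
  # the in-place ModifyTargetGroup call that the API rejects. Note that a
  # fixed "name" would collide during replacement; use name_prefix or a
  # changing name if you take this route.
  lifecycle {
    create_before_destroy = true
  }
}
```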

@radeksimko radeksimko added bug Addresses a defect in current functionality. service/elbv2 Issues and PRs that pertain to the elbv2 service. labels Jan 18, 2018
radeksimko (Member) commented

Hi @subratkprusty
I believe this is a duplicate of #2708, which was fixed in #2906 and released in 1.7.0 last Friday. Have you tried upgrading to that version of the provider via `terraform init -upgrade`?

@radeksimko radeksimko added the waiting-response Maintainers are waiting on response from community or contributor. label Jan 18, 2018
subratkprusty (Author) commented Jan 19, 2018

Upgrading provider did not help.

subratprusty@xxxxx$ terraform init -upgrade
Upgrading modules...
XXXXXXXXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXXXXXXXXXXX
Initializing the backend...

Initializing provider plugins...

  • Checking for available provider plugins on https://releases.hashicorp.com...
  • Downloading plugin for provider "random" (1.1.0)...
  • Downloading plugin for provider "aws" (1.7.0)...
  • Downloading plugin for provider "archive" (1.0.0)...
  • Downloading plugin for provider "template" (1.0.0)...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

  • provider.archive: version = "~> 1.0"
  • provider.random: version = "~> 1.1"
  • provider.template: version = "~> 1.0"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Output:
Error: Error applying plan:

1 error(s) occurred:

  • module.nw_lb.aws_lb_target_group.main: 1 error(s) occurred:

  • aws_lb_target_group.main: Error modifying Target Group: InvalidConfigurationRequest: You cannot change the health check protocol for a target group with the TCP protocol
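The init output above also shows that no version constraint is declared for the aws provider, so future runs could silently pick up a different release. A minimal sketch of pinning it, in the Terraform 0.11-era syntax used in this thread (the `region` value is an assumption, not taken from the issue):

```hcl
provider "aws" {
  # Constrain the provider to 1.7.x, the release that radeksimko
  # indicated contains the fix from #2906.
  version = "~> 1.7"

  # Assumed for illustration; the thread does not show a region.
  region = "us-east-1"
}
```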

@radeksimko radeksimko removed the waiting-response Maintainers are waiting on response from community or contributor. label Jan 19, 2018
kaushikreddi9 commented
I am using provider.aws version = "~> 2.23" and I am still facing this issue. Let me know how it can be resolved, or whether there is any workaround.

kaushikreddi9 commented
I have fixed the issue by setting the values below:

```hcl
timeout  = 10
matcher  = "200-399"
interval = 30
```
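Applied to the configuration from the original report, that workaround would look like this (a sketch; the surrounding target-group arguments are carried over from the first comment, and the choice of values likely matters because NLB health checks accept only certain timeout/interval combinations and a 200-399 success-code range):

```hcl
health_check {
  healthy_threshold   = 2
  unhealthy_threshold = 2
  timeout             = 10
  port                = "443"
  protocol            = "HTTPS"
  path                = "/index.html"
  interval            = 30
  matcher             = "200-399"
}
```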

github-actions bot commented

Marking this issue as stale due to inactivity. This helps our maintainers find and focus on the active issues. If this issue receives no comments in the next 30 days it will automatically be closed. Maintainers can also remove the stale label.

If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thank you!

@github-actions github-actions bot added the stale Old or inactive issues managed by automation, if no further action taken these will get closed. label Sep 15, 2021

github-actions bot commented Jun 1, 2022

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jun 1, 2022