A simplified load balancer resource for AzureRM #6429
Conversation
Here is HCL that will put 2 VMs behind the load balancer and route SSH to them:
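(The linked HCL was not preserved in this thread. Below is a minimal sketch of what such a configuration might look like, reconstructed from the azurerm_simple_lb schema that appears in the example config later in this conversation; the resource group and public IP references are placeholders.)

# Sketch only: reconstructed from the schema used later in this thread.
resource "azurerm_simple_lb" "ssh" {
  name                       = "ssh-lb"
  location                   = "West US"
  resource_group_name        = "${azurerm_resource_group.test.name}"
  frontend_public_ip_address = "${azurerm_public_ip.test.id}"
  frontend_allocation_method = "Dynamic"
  probe {
    name             = "ssh"
    protocol         = "tcp"
    port             = 22
    interval         = 5
    number_of_probes = 2
  }
  rule {
    name              = "ssh"
    protocol          = "tcp"
    load_distribution = "Default"
    frontend_port     = 22
    backend_port      = 22
    probe_name        = "ssh"
  }
}

Each VM's NIC would then join the pool via the resource's exported backend_pool_id attribute, e.g. load_balancer_backend_address_pools_ids = ["${azurerm_simple_lb.ssh.backend_pool_id}"] in its ip_configuration block, as shown in the full example config later in this thread.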
Hi @buzztroll! I agree that the load balancer is likely to be overly complex for a single resource. In lieu of the nested configurations discussed in a previous ticket, I think this might be a good route forward to getting something usable. Let me discuss with @phinze et al. on Monday before merging - first impressions of the code are good, however! Thanks for your work here!
@andreimc and @iaingblack, you showed interest in a load balancer resource for Azure. Would this simple, flat one meet your needs (or at least some of your needs)?
I'll take it! For my use cases at least, a single subnet is fine. I'll give it a test as soon as I can. Reading the description, it says it can be configured "according to one load balancing rule and 1 probe". Does that mean only 1 rule can be created, or can there be many? Multiple rules to allow exposing both RDP and WinRM ports on a machine would be nice. But I can open all ports to a single VM from a specific IP if not. Thanks for the quick implementation, Buzztroll!
With this implementation it is 1 rule, 1 probe, 1 frontend, 1 backend, 1 everything. It wouldn't be hard to extend this to be 1 of everything else but allow many rules. I have a work in progress for a more complete load balancer, but I ran into some buggy issues with the Azure API (the result of the Create does not match the result of a Get performed immediately after). If the community finds it better to have this simple implementation but with many rules, I can extend it to do that fairly quickly.
Cool. If you can do many rules, that would be great. Having one rule to expose specific ports to the public/client and another rule to expose WinRM/RDP to the sysadmin would be the ideal implementation, at least from my perspective. The rest would be nice, but that would meet my requirements at least. As it is, this is already very encouraging. Thanks again for working on it, Buzztroll :)
Force-pushed from 9f62375 to 51efe53
@iaingblack I patched it to take multiple probes and rules. Example HCL can be found here. I am good with whatever incantation is easiest for HashiCorp to merge. @jen20, do you have any preferences here?
Force-pushed from 5e68896 to 1b3d9d9
Force-pushed from 1b3d9d9 to 1bab959
I've been trying to use this, and I'm running into problems with the VMs failing to create. Here's my configuration:
variable "name" { default = "dcos" }
variable "location" { default = "West US" }
variable "environment" { default = "DC/OS" }
resource "azurerm_resource_group" "dcos" {
name = "${var.name}"
location = "${var.location}"
tags {
environment = "${var.environment}"
}
}
resource "azurerm_virtual_network" "dcos" {
name = "${var.name}"
resource_group_name = "${azurerm_resource_group.dcos.name}"
address_space = ["10.0.0.0/16"]
location = "${var.location}"
tags {
environment = "${var.environment}"
}
}
resource "azurerm_subnet" "test" {
name = "public-slave"
resource_group_name = "${azurerm_resource_group.dcos.name}"
virtual_network_name = "${azurerm_virtual_network.dcos.name}"
address_prefix = "10.0.16.0/22"
}
resource "azurerm_availability_set" "test" {
name = "test"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.dcos.name}"
tags {
environment = "${var.environment}"
}
}
resource "azurerm_storage_account" "test" {
name = "8d40b744a9ebe520f861c8e0"
resource_group_name = "${azurerm_resource_group.dcos.name}"
location = "${var.location}"
account_type = "Standard_LRS"
tags {
environment = "${var.environment}"
}
}
resource "azurerm_storage_container" "test" {
name = "${var.name}-test-${count.index}-vhds"
resource_group_name = "${azurerm_resource_group.dcos.name}"
storage_account_name = "${azurerm_storage_account.test.name}"
container_access_type = "private"
count = 2
}
resource "azurerm_network_interface" "test" {
name = "${var.name}-test-${count.index}-nic"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.dcos.name}"
ip_configuration {
name = "test-${count.index}"
subnet_id = "${azurerm_subnet.test.id}"
private_ip_address_allocation = "dynamic"
load_balancer_backend_address_pools_ids = ["${azurerm_simple_lb.test.backend_pool_id}"]
}
count = 2
}
resource "azurerm_virtual_machine" "test" {
name = "${var.name}-test-${count.index}"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.dcos.name}"
network_interface_ids = ["${element(azurerm_network_interface.test.*.id, count.index)}"]
availability_set_id = "${azurerm_availability_set.test.id}"
vm_size = "Standard_A6"
storage_image_reference {
publisher = "openlogic"
offer = "CentOS"
sku = "7.2"
version = "latest"
}
storage_os_disk {
name = "os-disk"
vhd_uri = "${azurerm_storage_account.test.primary_blob_endpoint}${element(azurerm_storage_container.test.*.name, count.index)}/os-disk.vhd"
caching = "ReadWrite"
create_option = "FromImage"
}
os_profile {
computer_name = "${var.name}-test-${count.index}"
admin_username = "centos"
admin_password = "centos"
}
os_profile_linux_config {
disable_password_authentication = false
}
tags {
environment = "${var.environment}"
}
count = 2
}
resource "azurerm_public_ip" "test-lb" {
name = "${var.name}-test-lb-ip"
location = "${var.location}"
resource_group_name = "${azurerm_resource_group.dcos.name}"
public_ip_address_allocation = "static"
tags {
environment = "${var.environment}"
}
}
resource "azurerm_simple_lb" "test" {
name = "${var.name}-test"
location = "${var.location}"
type = "Microsoft.Network/loadBalancers"
resource_group_name = "${azurerm_resource_group.dcos.name}"
frontend_public_ip_address = "${azurerm_public_ip.test-lb.id}"
frontend_allocation_method = "Dynamic"
probe {
name = "HTTP"
protocol = "tcp"
port = 80
interval = 5
number_of_probes = 2
}
rule {
protocol = "tcp"
load_distribution = "Default"
frontend_port = 80
backend_port = 80
name = "HTTP"
probe_name = "HTTP"
}
probe {
name = "HTTPS"
protocol = "tcp"
port = 443
interval = 5
number_of_probes = 2
}
rule {
protocol = "tcp"
load_distribution = "Default"
frontend_port = 443
backend_port = 443
name = "HTTPS"
probe_name = "HTTPS"
}
}
I'm getting the following error:
I've noticed that if I create the VMs first, then add the load balancer, things appear to work fine.
@Bowbaq Thanks for trying this out. The error messages reported in the azurerm provider are very limited. There is a patch out to help with this: #6445. For now, it would help if you could go into the Azure portal and look at the audit trail for the resource group. There should be errors reported there in some very unfriendly-looking JSON documents.
@Bowbaq I attempted to run your tf file and got the same errors. Sadly, there were no illuminating details in the Azure portal. However, when I commented out all of the references to the load balancer I got the same errors, so I do not believe that this is related to the LB.
@Bowbaq I debugged your HCL a bit. Change the azurerm_virtual_machine.storage_os_disk like so:
That worked for me. The URI needs to be unique across both VMs.
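(The suggested snippet itself was not preserved above; the following is a hypothetical reconstruction, assuming the change was to include count.index in the disk name so each VM writes to a distinct blob:)

storage_os_disk {
  # os-disk-0.vhd / os-disk-1.vhd instead of the same os-disk.vhd for both VMs
  name = "os-disk-${count.index}"
  vhd_uri = "${azurerm_storage_account.test.primary_blob_endpoint}${element(azurerm_storage_container.test.*.name, count.index)}/os-disk-${count.index}.vhd"
  caching = "ReadWrite"
  create_option = "FromImage"
}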
@buzztroll I'm still pretty new to Azure; wouldn't the separate storage container generate a different URI?
}
ruleSet.Add(r)
}
d.Set("subnet", ruleSet)
I think this should be d.Set("rule", ruleSet)
@buzztroll I'm running into some issues updating an existing load balancer. Specifically, changing probes/rules seems to result in a 400 Bad Request every time. I think the problem is in the 2-step create-then-update process. If a probe gets removed, then during the first step the existing rules end up pointing to a now-removed probe, and it fails.
@Bowbaq thanks for the review! I fixed the things you caught. When you remove a probe, do you also remove all of the rules referencing it? I think Azure requires that.
So what happened is that I had a 1-to-1 probe/rule setup. Then I modified the rules to use the same probe and removed the now-useless extra probes. That's when I hit the problem above.
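(For illustration, based on the config posted earlier in this thread, the updated azurerm_simple_lb body would have looked roughly like this: the HTTPS probe block deleted, with both rules now pointing at the remaining HTTP probe.)

probe {
  name = "HTTP"
  protocol = "tcp"
  port = 80
  interval = 5
  number_of_probes = 2
}
rule {
  name = "HTTP"
  protocol = "tcp"
  load_distribution = "Default"
  frontend_port = 80
  backend_port = 80
  probe_name = "HTTP"
}
rule {
  name = "HTTPS"
  protocol = "tcp"
  load_distribution = "Default"
  frontend_port = 443
  backend_port = 443
  probe_name = "HTTP"   # was "HTTPS"; the HTTPS probe block is gone
}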
OK, I see how that would cause a problem. The way the ARM LB API works, this has to be done in two phases or the initial create will not have the right IDs. However, I should be able to factor out the update. Thanks!
This patch allows a probe to be deleted if the load balancing rules are adjusted properly as well.
…thod to prevent unnecessary recreations
Brought test function names into line with the rest of the azurerm package; fixed typos of "frontend"; fixed variable casing.
Force-pushed from aa14ddd to 0660a30
We've actually noticed a drawback to this approach compared to #6467, which has the nested resources separated. We're unable to iterate through a set of objects or pass in a set of probes or rules to this resource. Having separate resources would allow us to cut down on repetition in our code. All up for discussion, of course :).
Agreed, but there are other complications with that approach. For example, a load balancer is invalid without at least one frontend. This causes complications with updates and deletes.
I've been testing this out; it works well for a simple web load balancer. Would like to see it get merged in.
I agree with @darinhoward; I have been using this for several months without issue. It would be great to see it merged into the main line and released. How can we get movement on this?
@buzztroll Are there any plans to get this merged? Or is this PR dead? On a side note, it is interesting that Azure Network Peering makes it into the codebase while still in preview, yet a simple load balancer is having so much trouble. Are there any steps I may take to assist in getting this moving?
Virtual Network Peering is a very simple resource, whereas load balancers are not. Careful consideration is needed to develop a solution which meets as many use cases as possible.
@pmcatominey No doubt on the complexity differences. Would it not be better to have an LB that satisfies at least some of the basic use cases rather than nothing at all?
I would love to see it merged. Please let me know what I need to do to make that happen.
Hi everyone. This has been a hot topic for us in Slack this week. Among the folks who worked on the Azure provider, there is unanimous agreement that:
@jen20 in particular is heading up the conversation with a few companies that really want this feature, as well as the proper Microsoft connections, about some issues and questions we have. The solution is that we're prioritizing writing an Azure LB using individual separated resources rather than one big one.

As for this PR, we're going to close it, because it isn't best for Terraform to support something like this long term since it doesn't map directly to how the API is structured. We agree that the LB is complex right now, and we're going to work to simplify it and document that.

What we recommend is that the creators of this PR (or the community) extract this into a 3rd-party provider. We're going to start building a web page linked from terraform.io directly to 3rd-party providers so they can get some visibility instead of just hopeful Googling. I'd love for this to be a part of that. I'll follow up with @jen20 to provide any guidance here and pointers to the new work being done.

Sorry for not merging this; it always feels really bad to just "reject" work that has been done, but please understand that merging this means we can effectively never un-merge it. The Terraform team would be stuck maintaining this resource for a long, long time, and we don't think this is the approach we want to take with the work we have planned. Terraform's easy-to-use plugin model was made for exactly these scenarios, and is very stable (we dogfood it ourselves!). Please use it to your advantage here, and stay tuned for more work on this.
@mitchellh @jen20 I don't know who you're in contact with, but I've been watching this and wrote the LB integration for Kubernetes/Azure. I know where some of the warts are and I know people to contact about them. Please reach out if I can be of any assistance.
@colemickens Will do. I'm wrangling a bit to make sure we get our communication right for this and to see if I can get any timelines for you. I'll make sure James sees your comment as well.
We've started working on this in #9199 - please refer to that PR for future updates on Azure load balancing! Thanks @colemickens and all who have contributed so far.
@mitchellh: Thank you for your comments; they make sense. I am happy that the community found this useful, and I was thrilled with how easy it was to make a provider plugin for Terraform. I will watch #9199. I learned some tough lessons trying to tease the Azure LB into separate resources, so hopefully I can be helpful there. If there is further interest in the simple LB, I will break it out into a third-party contribution.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
This commit adds a resource for the ARM load balancer. The ARM LB is quite complicated and has many different configuration options. Instead of trying to implement them all in a potentially confusing user experience, this resource exposes the most common uses. With it, a load balancer can be configured on a single public IP or subnet, which can route to a single backend pool according to one load balancing rule and one probe.

This could be expanded to support more features of the LB, or live alongside a more sophisticated load balancer implementation as an easy-to-use option.