Cannot create azurerm_storage_container in azurerm_storage_account that uses network_rules #2977
We just ran into this ourselves. Nice to see someone else has already raised the issue with excellent documentation. The workaround we are testing is to call out to an ARM template for creating the containers. This is not ideal for several reasons, but it's what we've got, and it could be a workaround for you if you need one. I'm using two parts: a JSON file with the ARM template, and a Terraform template deployment that invokes it.
```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": {
      "type": "string"
    },
    "location": {
      "type": "string"
    }
  },
  "resources": [
    {
      "name": "[parameters('storageAccountName')]",
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2018-07-01",
      "location": "[parameters('location')]",
      "resources": [
        {
          "name": "default/images",
          "type": "blobServices/containers",
          "apiVersion": "2018-07-01",
          "dependsOn": [
            "[parameters('storageAccountName')]"
          ]
        },
        {
          "name": "default/backups",
          "type": "blobServices/containers",
          "apiVersion": "2018-07-01",
          "dependsOn": [
            "[parameters('storageAccountName')]"
          ]
        }
      ]
    }
  ]
}
```
resource "azurerm_storage_account" "standard-storage" {
name = "stdstorage"
location = "${var.location}"
resource_group_name = "${var.resource_group_name}"
account_tier = "Standard"
account_replication_type = "${var.standard_replication_type}"
enable_blob_encryption = "${var.standard_enable_blob_encryption}"
enable_https_traffic_only = true
network_rules {
ip_rules = "${var.firewall_allow_ips}"
virtual_network_subnet_ids = ["${var.vm_subnet_id}"]
}
}
resource "azurerm_template_deployment" "stdstorage-containers" {
name = "stdstorage-containers"
resource_group_name = "${var.resource_group_name}"
deployment_mode = "Incremental"
depends_on = [
"azurerm_storage_account.standard-storage",
]
parameters {
location = "${var.location}"
storageAccountName = "${azurerm_storage_account.standard-storage.name}"
}
template_body = "${file("${path.module}/storage-containers.json")}"
} |
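For context on why this works: the ARM template deployment goes through the Azure management plane (the Microsoft.Storage resource provider), which is not subject to the storage account firewall, whereas the provider's azurerm_storage_container resource of that era created containers through the data-plane blob API, which the network rules block.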
I hit this bug using Terraform 0.12.17 with AzureRM provider 1.37.0 and 1.38.0. @sschu I am also deploying from Azure DevOps hosted machines. The workaround I created was to temporarily add the build agent's public IP to the storage account firewall before the deployment and remove it afterwards. It's workable, but still a pain.
Hello. Currently, the creation of a storage container resource (blob, share) seems to use the storage data-plane API, which is behind the firewall.
When using Azure DevOps hosted agents to deploy, I ended up writing a piece of PowerShell that invokes the Azure CLI to allow that specific agent's public IP address into the storage account that had IP restrictions enabled. Like @jeffadavidson's, it's a script you can call as part of your deployments that will toggle the public IP of that agent either on or off (a -mode switch). As mentioned, I use it for Azure DevOps pipeline deployments, but it could be used anywhere else by other deployment tools.

I have this as a step in my deployments with -mode on, which allows access to the storage account, and another step at the end with -mode off. Note that you should run the -mode off step even if your deployment fails/crashes out, otherwise your storage account firewall rules are going to get messy with lots of orphaned IP addresses in them. If you are using YAML-based pipelines, that setting is...

...if using GUI-based releases it is a setting under ADVANCED options.
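The script itself isn't reproduced above, but the same idea can be sketched directly in Terraform with a local-exec provisioner. This is a hypothetical illustration, not the commenter's script: the resource name, the storage account reference, and the ipify lookup are all assumptions.

```hcl
# Hypothetical sketch: open the firewall for this agent's public IP before
# data-plane resources are created. Names and references are illustrative.
resource "null_resource" "allow_agent_ip" {
  provisioner "local-exec" {
    # Discover this agent's public IP, then add it to the account firewall.
    command = "az storage account network-rule add --resource-group ${var.resource_group_name} --account-name ${azurerm_storage_account.standard-storage.name} --ip-address $(curl -s https://api.ipify.org)"
  }
}
```

A matching `az storage account network-rule remove` call belongs in a cleanup step that always runs, for the reason the comment gives.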
There is another reason for using AZAPI that is related to this issue. It surfaces when you disable storage account Shared Key authentication, as per "well architected framework" guidance.

In order to make this work with Terraform, you need to add:

```hcl
provider "azurerm" {
  features {}
  storage_use_azuread = true
}
```

This changes the behaviour of Terraform so that instead of fetching the shared keys and using those, it uses the Entra ID/Azure AD permissions of the principal running Terraform. Then, if you try to create a container using azurerm_storage_container it will fail, but if you use the AZAPI provider, it works, for reasons similar to the firewall problem noted in this issue. Obviously, if you want to do any data-plane operations you would need to set IAM appropriately; however, if you just want to create containers and set permissions on them, AZAPI works fine. The GitHub action in this repository illustrates the behaviour of AZAPI vs AzureRM for this use case. It's worth noting there are some limitations when disabling shared key authentication, e.g. when using the table & files API, as per the AzureRM provider documentation; however, this approach works well for things like Terraform state (or any blob workload), and is useful where alignment to WAF principles is a requirement.
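For reference, creating a container through AZAPI looks roughly like the following. This is a minimal sketch: the account reference, container name, and API version are assumptions, and with azapi provider versions before 2.x the body must be wrapped in jsonencode().

```hcl
# Minimal sketch: create a blob container via the ARM management plane,
# avoiding the data-plane calls that fail behind the firewall or without
# shared key auth. Names and API version are illustrative.
resource "azapi_resource" "container" {
  type      = "Microsoft.Storage/storageAccounts/blobServices/containers@2023-01-01"
  name      = "tfstate"
  parent_id = "${azurerm_storage_account.example.id}/blobServices/default"

  body = {
    properties = {
      publicAccess = "None"
    }
  }
}
```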
Creating a container using the azurerm provider works just fine when public access to storage is disabled and a private endpoint is set up correctly. The provider uses data-plane operations to do so, and most corporate setups are like this.

So what concerns me more than creating the container is creating the storage account itself. There are two cures for this problem.

Btw, the exact same problem affects other resources as well, e.g. key vault.
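For anyone setting this up, the private endpoint wiring the comment assumes looks roughly like this. It's a sketch with hypothetical names; the subnet and resource references are assumptions, and private DNS must also resolve the account's blob hostname to the endpoint.

```hcl
# Sketch: a private endpoint for the blob sub-resource, so the provider's
# data-plane calls resolve to a private IP. Names are illustrative.
resource "azurerm_private_endpoint" "blob" {
  name                = "pe-stdstorage-blob"
  location            = var.location
  resource_group_name = var.resource_group_name
  subnet_id           = var.vm_subnet_id

  private_service_connection {
    name                           = "psc-stdstorage-blob"
    private_connection_resource_id = azurerm_storage_account.example.id
    subresource_names              = ["blob"]
    is_manual_connection           = false
  }
}
```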
This has been a nightmare, due to the static website properties the provider tries to read. It will only work if you get the setup right during creation; if anything is wrong, or there is a DNS resolution mismatch, it is not possible to fix without some manual trickery. It is also not possible to have it created with dynamic IPs, as this requires that the private endpoint gets created before the A record, and that creates a chicken-and-egg scenario.

Whilst the issue with the data plane is not resolved, can't a flag be set somewhere, anywhere, to forcefully discard anything related to static website properties? If I don't need it and won't use it, I would like to be allowed to disregard it. Since this would be an opt-in option, it wouldn't introduce any breaking change, and it would bring peace to this very annoying issue. To make a change like this not be an ugly temporary patch, you could introduce a concept that applies to all resources of the provider, something like "global_options", an open field for situations like this.

Edit:

```hcl
lifecycle {
  ignore_changes = [static_website]
}
```

🤯 Except, it doesn't help. It still tries to retrieve the static website properties. But it could be a solution...
Local development indeed is a nightmare. Scenario: the kind of locked-down setup described above, reached from a developer machine. And we get this error:

```
Error: checking for existing Container "test-container" (Account "Account \"accountname\" (IsEdgeZone false / ZoneName \"\" / Subdomain Type \"blob\" / DomainSuffix \"core.windows.net\")"): executing request: unexpected status 403 (403 This request is not authorized to perform this operation.) with AuthorizationFailure: This request is not authorized to perform this operation.
│ RequestId:e0f6fe03-801e-0015-5776-9c1a69000000
│ Time:2024-05-02T09:50:15.9741260Z
│
│ with azurerm_storage_container.container[0],
│ on main.tf line 49, in resource "azurerm_storage_container" "container":
│ 49: resource "azurerm_storage_container" "container" {
```

Any idea when this can be fixed? Like @guderkar said, that setup is kinda common in the industry.
I once tested with Azure VPN Gateway + Client, and I remember that two things are essential (details in the link below):
https://learn.microsoft.com/en-us/azure/vpn-gateway/azure-vpn-client-optional-configurations
Originally posted by @tombuildsstuff in #26542 (comment). In the near future we may see a 4.0, which should allow breaking things, right? It's something that could be unmanaged.
I had a similar issue with creating file shares in a storage account that has public access disabled. The Azure Portal allowed me to create the file share over the public internet, so I knew it was possible through the Azure API: the public access restriction only applies to accessing the data in the share, not creating the share. Anyway, to work around this I created the storage account with the AzureRM provider as normal, with public access disabled, then used the AzApi provider to circumvent the bug/limitation in the AzureRM provider and hit the API directly to manage the resource:
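The commenter's exact snippet isn't preserved in this thread; a minimal sketch of the approach, with hypothetical names and an assumed API version, would be:

```hcl
# Sketch: manage a file share through the ARM management plane. The account
# reference, share name, quota, and API version are illustrative.
resource "azapi_resource" "share" {
  type      = "Microsoft.Storage/storageAccounts/fileServices/shares@2023-01-01"
  name      = "myshare"
  parent_id = "${azurerm_storage_account.example.id}/fileServices/default"

  body = {
    properties = {
      shareQuota = 100 # size in GiB
    }
  }
}
```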
Took a little bit of tweaking and looking at the exported template, but the key was to get the parent_id right. I imagine for a blob container this would be blobServices instead of fileServices, but the concept should be similar. For blob services it might not actually be possible to create the container over the public internet with public access disabled; you'll have to confirm that in the portal. But since it was working in the portal for file shares, I knew that it SHOULD work through TF one way or the other. Note that using this method, state is managed properly: I tested by deleting the share after it was created, and TF plan/apply recreated it properly. There are other areas where I've been using the AzAPI provider as an "escape hatch" when something is broken or unsupported in the AzureRM provider.
It worked, thank you. But how can I get some output values of the created resource? I reviewed the azapi_resource docs and it is not documented. In my case I need the resource_manager_id of the created container to add a role. Without azapi and with a public container this works, but with azapi what I tried gives an error.
Probably couple it with a data resource.
The resource id is just the id exported by the azapi_resource itself; a data resource is not required. For example (an illustrative sketch, not from the thread; the resource and principal references are assumptions), you can use it directly as the scope of a role assignment:
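```hcl
# Illustrative: the azapi_resource id is the full ARM resource ID, so it can
# be used directly as the scope for a role assignment.
resource "azurerm_role_assignment" "container_contributor" {
  scope                = azapi_resource.container.id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = var.principal_id
}
```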
source = "Azure/avm-res-storage-storageaccount/azurerm"
version = "0.2.5"
account_replication_type = "LRS"
location = azurerm_resource_group.this.location
name = local.storage_account_name
resource_group_name = azurerm_resource_group.this.name
containers = {
demo = {
name = "demo"
container_access_type = "private"
role_assignments = {
contributor = {
role_definition_id_or_name = "Storage Blob Data Contributor"
principal_id = <a principal_id UUID>
}
}
}
} There is a walkthrough that illustrates this being used for a VM deployment in the training on Learn here: |
I will try with the id. Thank you.
Terraform (and AzureRM Provider) Version
Terraform v0.11.11
Affected Resource(s)
azurerm_storage_account
azurerm_storage_container
Terraform Configuration Files
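The original configuration isn't preserved in this extract; a hypothetical minimal reproduction, in the 0.11-era syntax the report uses, would look something like this (all names and values are illustrative):

```hcl
# Hypothetical repro: a storage account with network_rules, then a container.
# The container creation fails with a 403 because the data-plane call is
# blocked by the account firewall.
resource "azurerm_storage_account" "example" {
  name                     = "examplestorageacct"
  resource_group_name      = "${var.resource_group_name}"
  location                 = "${var.location}"
  account_tier             = "Standard"
  account_replication_type = "LRS"

  network_rules {
    ip_rules = ["203.0.113.0/24"]
  }
}

resource "azurerm_storage_container" "example" {
  name                  = "example"
  resource_group_name   = "${var.resource_group_name}"
  storage_account_name  = "${azurerm_storage_account.example.name}"
  container_access_type = "private"
}
```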
Debug Output
```
RequestId:a7f9d2e1-701e-00b3-4e74-cf3b34000000
Time:2019-02-28T14:45:53.7885750Z, RequestInitiated=Thu, 28 Feb 2019 14:45:53 GMT, RequestId=a7f9d2e1-701e-00b3-4e74-cf3b34000000, API Version=, QueryParameterName=, QueryParameterValue=
```
Expected Behavior
Container can be created in a storage account that uses network rules
Actual Behavior
After applying a network_rule to a storage account, I cannot provision a container into it. My public IP is included in the address range specified in the network rule. I can successfully create the container via the Azure portal.
Steps to Reproduce
terraform apply