
Cannot create azurerm_storage_container in azurerm_storage_account that uses network_rules #2977

Closed
phil-bevan opened this issue Feb 28, 2019 · 104 comments · Fixed by #27733
Labels: bug · service/storage · upstream/microsoft (blocked by an upstream issue) · v/1.x (legacy)

phil-bevan commented Feb 28, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

Terraform v0.11.11

  • provider.azurerm v1.21.0

Affected Resource(s)

  • azurerm_storage_account
  • azurerm_storage_container

Terraform Configuration Files

resource "azurerm_storage_account" "test-storage-acct" {
  name                     = "${var.prefix}storacct"
  resource_group_name      = "${var.resgroup}"
  location                 = "${var.location}"
  account_tier             = "Standard"
  account_replication_type = "LRS"
  network_rules {
    ip_rules                   = ["aaa.bbb.ccc.ddd/ee"]
    virtual_network_subnet_ids = ["${var.subnetid}"]
  }
}
resource "azurerm_storage_container" "provisioning" {
  name                  = "${var.prefix}-provisioning"
  resource_group_name   = "${var.resgroup}"
  storage_account_name  = "${azurerm_storage_account.test-storage-acct.name}"
  container_access_type = "private"
}

Debug Output

  • azurerm_storage_container.provisioning: Error creating container "philtesting1-provisioning" in storage account "philtesting1storacct": storage: service returned error: StatusCode=403, ErrorCode=AuthorizationFailure, ErrorMessage=This request is not authorized to perform this operation.
    RequestId:a7f9d2e1-701e-00b3-4e74-cf3b34000000
    Time:2019-02-28T14:45:53.7885750Z, RequestInitiated=Thu, 28 Feb 2019 14:45:53 GMT, RequestId=a7f9d2e1-701e-00b3-4e74-cf3b34000000, API Version=, QueryParameterName=, QueryParameterValue=

Expected Behavior

Container can be created in a storage account that uses network rules

Actual Behavior

After applying a network_rule to a storage account, I cannot provision a container into it. My public IP is included in the address range specified in the network rule, and I can successfully create the container via the Azure portal.

Steps to Reproduce

  1. terraform apply
farseeker commented Mar 12, 2019

We just ran into this ourselves. Nice to see someone else has already raised the issue with excellent documentation.

The workaround we are testing is to call out to an ARM template for creating the containers. This is not ideal for several reasons:

  1. It's not Terraform-native
  2. It's more moving parts and more complicated to manage
  3. ARM templates only apply once, so if the configuration drifts over time Terraform will not set it back

But it's what we've got. This could be a workaround for you if you need this.

I'm using two parts: a JSON file with the ARM template, and a Terraform azurerm_template_deployment.

storage-containers.json

{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "storageAccountName": {
            "type": "string"
        },
        "location": {
            "type": "string"
        }
    },
    "resources": [
        {
            "name": "[parameters('storageAccountName')]",
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2018-07-01",
            "location": "[parameters('location')]",
            "resources": [
                {
                    "name": "default/images",
                    "type": "blobServices/containers",
                    "apiVersion": "2018-07-01",
                    "dependsOn": [
                        "[parameters('storageAccountName')]"
                    ]
                },
                {
                    "name": "default/backups",
                    "type": "blobServices/containers",
                    "apiVersion": "2018-07-01",
                    "dependsOn": [
                        "[parameters('storageAccountName')]"
                    ]
                }
            ]
        }
    ]
}

main.tf

resource "azurerm_storage_account" "standard-storage" {
  name                = "stdstorage"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  account_tier              = "Standard"
  account_replication_type  = "${var.standard_replication_type}"
  enable_blob_encryption    = "${var.standard_enable_blob_encryption}"
  enable_https_traffic_only = true

  network_rules {
    ip_rules                   = "${var.firewall_allow_ips}"
    virtual_network_subnet_ids = ["${var.vm_subnet_id}"]
  }
}

resource "azurerm_template_deployment" "stdstorage-containers" {
  name                = "stdstorage-containers"
  resource_group_name = "${var.resource_group_name}"
  deployment_mode     = "Incremental"

  depends_on = [
    "azurerm_storage_account.standard-storage",
  ]

  parameters {
    location           = "${var.location}"
    storageAccountName = "${azurerm_storage_account.standard-storage.name}"
  }

  template_body = "${file("${path.module}/storage-containers.json")}"
}

@ranokarno

This comment has been minimized.

@tombuildsstuff

This comment has been minimized.

@amdiodati

This comment has been minimized.

@jeffadavidson

I hit this bug using terraform 0.12.17 with AzureRM provider 1.37.0 and 1.38.0.

@sschu I am also deploying from Azure DevOps hosted machines. The workaround I created was:

  • Prior to the terraform tasks, run an Azure PowerShell task that gets the host's public IP and adds it to the network rules using 'Add-AzStorageAccountNetworkRule'
  • Run Terraform steps
  • Remove network rule using Remove-AzStorageAccountNetworkRule

It's workable, but still a pain


@boillodmanuel (Contributor)

Hello,
It seems related to this azure-cli issue: Azure/azure-cli#10190

Currently, the creation of a storage container resource (blob, share) seems to use the storage data-plane API, which is behind the firewall. Instead, it should use the Resource Manager provider. In the issue mentioned above, I just discovered that az cli has an az storage share-rm create in addition to the existing az storage share create. I don't know if there is an equivalent for blob, or whether this exists in the Azure REST API or in Terraform :)
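
For reference, the same split can be expressed in Terraform terms: azurerm_storage_share goes through the data-plane File API, while an azapi_resource addressing Microsoft.Storage/storageAccounts/fileServices/shares goes through Resource Manager only. A minimal sketch, assuming a storage account named azurerm_storage_account.example and an azapi provider version that accepts an HCL object for body:

resource "azurerm_storage_share" "via_data_plane" {
  # Data-plane path: talks to file.core.windows.net, so the storage account
  # firewall applies and the caller's IP/subnet must be allowed through.
  name                 = "example-share"
  storage_account_name = azurerm_storage_account.example.name
  quota                = 50
}

resource "azapi_resource" "via_resource_manager" {
  # Control-plane path: talks only to management.azure.com, so the storage
  # firewall does not apply (the equivalent of `az storage share-rm create`).
  type      = "Microsoft.Storage/storageAccounts/fileServices/shares@2022-09-01"
  name      = "example-share-rm"
  parent_id = "${azurerm_storage_account.example.id}/fileServices/default"

  body = {
    properties = {
      shareQuota = 50
    }
  }
}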



mcalnd70 commented May 26, 2020

When using Azure DevOps hosted agents to deploy, I ended up writing this piece of PowerShell that invokes Azure CLI to allow that specific agent's public IP address into the Storage Account that had IP restrictions enabled, similar to @jeffadavidson's approach.

It's a script you can call as part of your deployments that will toggle the public IP of that agent either on or off (-mode switch).

As mentioned I use it for Azure DevOps pipeline deployments, but it could be used anywhere else by other deployment tools...

<#
.SYNOPSIS
Set (by mode: ON OFF) the Storage Account Firewall Rules by Public IP address. Used by Azure DevOps Build/Release agents
See here : https://github.com/terraform-providers/terraform-provider-azurerm/issues/2977
.DESCRIPTION
Using Azure CLI
.EXAMPLE
.\SetMode_PublicIPAddress_SA.ps1 -storageaccount sa12345random -resourcegroup RG-NDM-TEST -mode on
.NOTES
Written by Neil McAlister - March 2020
#>
param (
    [Parameter(Mandatory=$true)]
    [string]$storageaccount,
    [Parameter(Mandatory=$true)]
    [string]$resourcegroup,
    [Parameter(Mandatory=$true)]
    [string]$mode
)
#
$ip = Invoke-RestMethod http://ipinfo.io/json | Select -exp ip
write-host $ip
#
if ($mode -eq 'on') { 
az storage account network-rule add --resource-group $resourcegroup --account-name $storageaccount --ip-address $ip
} 
#
if ($mode -eq 'off') {
az storage account network-rule remove --resource-group $resourcegroup --account-name $storageaccount --ip-address $ip
}

I have this as a step in my deployments with -mode on, which allows access to the SA.

I also have another step at the end with -mode off. Note that you should run the -mode off step even if your deployment fails/crashes out, otherwise your SA firewall rules are going to get messy with lots of orphaned IP addresses in them.

If you are using YAML-based pipelines, that setting is...

condition: always()

...if using GUI-based releases, it is a setting under ADVANCED options.

kewalaka commented Nov 24, 2023

There is another reason for using AZAPI that is related to this issue. It surfaces when you disable storage account Shared Key authentication, as per "Well-Architected Framework" guidance.

To make this work with Terraform, you need to add storage_use_azuread = true to your provider block, i.e. something like this:

provider "azurerm" {
  features {}
  storage_use_azuread = true
}

This changes the behaviour of Terraform so that instead of fetching the shared keys and using those, it uses EntraID/AzureAD permissions of the principal running Terraform.

Then, if you try to create a container using azurerm_storage_container it will fail, but if you use the AZAPI provider, it works, for reasons similar to the firewall problem noted in this issue.

Obviously, if you want to do any data-plane operations, you would need to set IAM appropriately; however, if you just want to create containers and set permissions on them, AZAPI works fine.

The GitHub action in this repository illustrates the behaviour of AZAPI vs AzureRM for this use case.

It's worth noting there are some limitations when disabling shared key authentication, e.g. when using the table & files API as per the AzureRM provider documentation, however this approach works well for things like Terraform state (or any blob workload), and is useful where alignment to WAF principles is a requirement.

guderkar commented Jan 17, 2024

Creating a container using the azurerm provider works just fine when public access to storage is disabled and a private endpoint is set up correctly. The provider uses data-plane operations to do so (i.e. blob.core.windows.net / dfs.core.windows.net API calls).

Most corporate setups:

  • use a proxy for internet access (the proxy does the DNS resolving, and most of the time it can resolve only public IPs, not internal ones) -> only control-plane operations work
  • use direct access for internal resources (the deployment agent does the DNS resolving, so it can resolve internal (privatelink) addresses and reach private endpoints; the NO_PROXY=blob.core.windows.net,dfs.core.windows.net environment variable must be set). Direct access is meant only for communication with internal resources -> it is not possible to reach the internet directly.

So what concerns me more than creating the container is creating the azurerm_storage_account resource itself, which performs these data-plane operations on creation. With the NO_PROXY variable set, the deployment agent resolves the data-plane API call to a public IP (because the private endpoint does not exist yet) and goes directly via the internal network, which is not meant for internet access -> it times out and fails. This is a vicious cycle, because the private endpoint cannot be created before the storage account.

There are two cures for this problem:

  • the one mentioned above -> use azapi to create the storage account, because azapi performs only control-plane operations (see the sketch below)
  • install a local proxy server on the deployment agents which does the following:
    • when a domain resolves to a public IP, forward the request to the corporate proxy
    • when a domain resolves to a private IP, go direct (don't use the proxy)

Btw, other resources, e.g. Key Vault, have exactly the same problem.
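
A minimal sketch of the first cure, creating the storage account itself through azapi so that only control-plane calls are made; the resource group reference, account name, and API version here are assumptions:

resource "azapi_resource" "storage_account" {
  # Created via azapi: only management.azure.com is contacted, so no proxy
  # exception or private-endpoint DNS is needed at creation time.
  type      = "Microsoft.Storage/storageAccounts@2022-09-01"
  name      = "examplestorageacct"
  parent_id = azurerm_resource_group.example.id
  location  = azurerm_resource_group.example.location

  body = {
    kind = "StorageV2"
    sku = {
      name = "Standard_LRS"
    }
    properties = {
      minimumTlsVersion   = "TLS1_2"
      publicNetworkAccess = "Disabled"
      networkAcls = {
        defaultAction = "Deny"
        bypass        = "AzureServices"
      }
    }
  }
}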

fabio-s-franco commented Mar 26, 2024

This has been a nightmare, due to the static website properties it tries to read. If public network access is disabled, you're done. Good luck.

It will only work if you get the setup right during creation; if anything is wrong, or a DNS resolution mismatches, it is not possible to fix without some manual trickery.

It is also not possible to have it created with dynamic IPs, as this requires that the private endpoint is created before the A record, and that creates a chicken-and-egg scenario.

Whilst the data-plane issue is not resolved, can't a flag be set somewhere, anywhere, to forcefully discard anything related to the static website properties? If I don't need it and won't use it, I would like to be allowed to disregard it. Since this would be an opt-in option, it wouldn't introduce any breaking change, and it would bring peace to this very annoying issue.

To keep a change like this from being an ugly temporary patch, you could introduce a concept that applies to all resources of the provider, something like "global_options", an open field for situations like this.

Edit:
It just came to me, I could simply use:

lifecycle {
  ignore_changes = [static_website]
}

🤯

Except, it doesn't help: it still tries to retrieve the static website properties. But it could be a solution...
@tombuildsstuff I saw your comment here (#20257 (comment)): shouldn't something like the above do exactly what was being asked, albeit explicitly? I am surprised I hadn't thought of that myself before, and also that this doesn't seem to matter to the provider. If ignore_changes is analogous to the "not track" behaviour you mentioned, this would make it very explicit for whoever wants to do so, and it actually fits Terraform's resource model.

@michasacuer

Local development indeed is a nightmare.

Scenario:

  • Storage Account connected to Private Endpoint
  • Private endpoint for blobs connected to private dns
  • Connected to VPN on local machine

And we have an error:

Error: checking for existing Container "test-container" (Account "Account \"accountname\" (IsEdgeZone false / ZoneName \"\" / Subdomain Type \"blob\" / DomainSuffix \"core.windows.net\")"): executing request: unexpected status 403 (403 This request is not authorized to perform this operation.) with AuthorizationFailure: This request is not authorized to perform this operation.
│ RequestId:e0f6fe03-801e-0015-5776-9c1a69000000
│ Time:2024-05-02T09:50:15.9741260Z
│
│   with azurerm_storage_container.container[0],
│   on main.tf line 49, in resource "azurerm_storage_container" "container":
│   49: resource "azurerm_storage_container" "container" {

Any idea when this can be fixed? Like @guderkar said, that setup is quite common in the industry.

guderkar commented May 6, 2024

I once tested with Azure VPN Gateway + Client, and I remember two things being essential:

  • When connected to the VPN, make sure you are using DNS servers from the internal network
  • The VPN must advertise to the client the private networks where the resources with private endpoints are deployed

https://learn.microsoft.com/en-us/azure/vpn-gateway/azure-vpn-client-optional-configurations

jkroepke (Contributor) commented Jul 4, 2024

Whilst we may look to split the Data Plane functionality out to a separate resource in the future, unfortunately doing so breaks some scenarios (for example, guaranteeing that there's no static website configured, which would need to be done within the main azurerm_storage_account resource).

Originally posted by @tombuildsstuff in #26542 (comment)

In the near future we may see a 4.0 release, which should allow breaking changes, right?

guaranteeing that there's no static website configured

Could azurerm_storage_account leave the static website configuration unmanaged, with a dedicated resource like azurerm_storage_account_static_website_configuration instead?

Similar to https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_website_configuration
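
For illustration only, such a split-out resource might look something like the following; the resource type and its arguments are hypothetical, mirroring the AWS resource linked above:

resource "azurerm_storage_account_static_website_configuration" "example" {
  # Hypothetical resource, not implemented in the azurerm provider.
  storage_account_id = azurerm_storage_account.example.id
  index_document     = "index.html"
  error_404_document = "404.html"
}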

gburkecw commented Aug 1, 2024

I had a similar issue with creating file shares in a storage account that has public access disabled. The Azure Portal allowed me to create the file share through the public internet, so I knew it was possible to do so through the Azure API. The public access restriction only applies to accessing the data in the share, not creating the share.

Anyways, to work around this I created the storage account with the AzureRM provider as normal, with public access disabled. Then I used the AzApi provider to circumvent the bug/limitation in the AzureRM provider and hit the Azure API directly to manage the resource:

resource "azapi_resource" "myshare" {
  type      = "Microsoft.Storage/storageAccounts/fileServices/shares@2023-05-01"
  name      = "myshare"
  parent_id = "${azurerm_storage_account.st-smbshare.id}/fileServices/default"

  body = {
    properties = {
      accessTier : "Hot",
      shareQuota : 1024,
      enabledProtocols : "SMB"
    }
  }
}

Took a little bit of tweaking and looking at the exported template, but the key was to get the parent_id right. I imagine for a blob container this would be blobServices instead of fileServices, but the concept should be similar.

Now for blob services, it might not actually be possible to create the container over the public internet with public access disabled, you'll have to confirm that in the portal. But since it was working in the portal for file shares, I knew that it SHOULD work through TF one way or the other.

Note that using this method, state is managed properly. I tested by deleting the share after it was created and TF plan/apply recreated it properly. There are other areas where I've been using the AzAPI provider as an "escape hatch" when something is broken or unsupported in AzureRM provider.

@jrichardsz

I ran into this awhile back and would manually create the containers. I finally figured out the simplified AzAPI code and am posting it here. This will create a storage account that disables public access and enables NFSv3, then create a container in that account.

resource "azurerm_storage_account" "group_blob_storage" {
  name                      = "example_storage_account"
  resource_group_name       = local.app_rg_name
  location                  = local.location
  account_kind              = "StorageV2"
  account_tier              = "Standard"
  access_tier               = "Hot"
  account_replication_type  = "LRS"
  enable_https_traffic_only = true
  is_hns_enabled            = true
  nfsv3_enabled             = true
  min_tls_version           = "TLS1_2"
  allow_blob_public_access  = false
  tags                      = local.default_tags
  lifecycle {
    ignore_changes = [
      tags["CreationDate"],
    ]
  }
  network_rules {
    default_action = "Deny"
  }
}

resource "azapi_resource" "group_blob_containers" {
  type      = "Microsoft.Storage/storageAccounts/blobServices/containers@2022-09-01"
  name      = "mycontainer"
  parent_id = "${azurerm_storage_account.group_blob_storage.id}/blobServices/default"
  body = jsonencode({
    properties = {
      defaultEncryptionScope      = "$account-encryption-key"
      denyEncryptionScopeOverride = false
      enableNfsV3AllSquash        = false
      enableNfsV3RootSquash       = false
      metadata                    = {}
      publicAccess                = "None"
    }
  })
  depends_on = [
    azurerm_storage_account.group_blob_storage
  ]
}

You can change the json-encoded properties as needed.

It worked, thank you.

But how can I get some output values of the created resource? I reviewed the azapi_resource docs and it is not documented.

In my case I need the resource_manager_id of the created container to add a role.

Without azapi (and with a public container) this works:

resource "azurerm_storage_container" "acme_container" {
  name                 = "acme-container"
  storage_account_name = azurerm_storage_account.storage.name
}

resource "azurerm_role_assignment" "acme_containert_blob_data_owner_to_subscriber_msi" {
  principal_id         = "${var.principal_id}"
  scope                = azurerm_storage_container.acme_container.resource_manager_id
  role_definition_name = "Storage Blob Data Owner"
}

But with azapi, I tried:

scope = azapi_resource.acme_container.resource_manager_id

But I get this error:

This object has no argument, nested block, or exported attribute named "resource_manager_id".

@dhirschfeld (Contributor)

You could probably couple it with a data object that depends on the azapi_resource.

@kewalaka

This object has no argument, nested block, or exported attribute named "resource_manager_id".

The resource id is just the .id property, ref:

https://github.com/Azure/terraform-provider-azapi/blob/main/docs/resources/resource_action.md#read-only

A data resource is not required.
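
In other words, the scope can reference the azapi resource's id directly. A minimal sketch, assuming the container resource from the earlier comment is named azapi_resource.acme_container:

resource "azurerm_role_assignment" "acme_container_blob_data_owner" {
  principal_id         = var.principal_id
  # azapi_resource exposes the full ARM resource ID as `id`, which is the same
  # value azurerm_storage_container exposes as `resource_manager_id`.
  scope                = azapi_resource.acme_container.id
  role_definition_name = "Storage Blob Data Owner"
}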

You might also consider looking at the Azure Verified Modules (https://aka.ms/AVM) for storage accounts. AVM modules provide a consistent interface for defining role assignments, in line with the resource:

module "storage_account" {
  source  = "Azure/avm-res-storage-storageaccount/azurerm"
  version = "0.2.5"

  account_replication_type          = "LRS"
  location                          = azurerm_resource_group.this.location
  name                              = local.storage_account_name
  resource_group_name               = azurerm_resource_group.this.name

  containers = {
    demo = {
      name                  = "demo"
      container_access_type = "private"
      role_assignments = {
        contributor = {
          role_definition_id_or_name = "Storage Blob Data Contributor"
          principal_id               = <a principal_id UUID>
        }
      }
    }
  }
}

There is a walkthrough that illustrates this being used for a VM deployment in the training on Learn here:
https://learn.microsoft.com/en-us/samples/azure-samples/avm-terraform-labs/avm-terraform-labs/

@jrichardsz

I will try with id.

Thank you
