
Azure Storage Blob: delete_blobs returns "tuple index out of range" when list_blobs are zero #12779

Closed
dr-dolittle opened this issue Jul 29, 2020 · 5 comments
Assignees
Labels
  • Client: This issue points to a problem in the data-plane of the library.
  • customer-reported: Issues that are reported by GitHub users external to the Azure organization.
  • needs-team-attention: Workflow: This issue needs attention from Azure service team or SDK team.
  • question: The issue doesn't require a change to the product in order to be resolved. Most issues start as that.
  • Service Attention: Workflow: This issue is responsible by Azure service team.
  • Storage: Storage Service (Queues, Blobs, Files).

Comments

@dr-dolittle

  • Package Name: azure-storage-blob
  • Package Version: 12.3.2
  • Operating System: Windows
  • Python Version: 3.8.1

I retrieve blobs matching a certain prefix and then delete them. Currently this is implemented with a (costly) workaround, as described in issue #12778.

When passing blob_paged directly to delete_blobs, it works as long as blob_paged contains at least one blob. If it is empty (i.e. no blob matches the prefix), delete_blobs raises the exception "tuple index out of range".

Can you please adjust the behavior so that delete_blobs can also handle an empty blob_paged?
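For reference, a minimal sketch of the list-then-delete flow with the guard I currently have to use; the connection string, container name, and prefix are placeholders, not taken from my actual code:

```python
from azure.storage.blob import ContainerClient

# Placeholders: connection string, container name, and prefix are assumptions.
container = ContainerClient.from_connection_string(
    conn_str="<connection-string>", container_name="<container-name>"
)

# list_blobs returns an ItemPaged iterator; materialize it so we can
# check whether anything matched the prefix.
blob_paged = list(container.list_blobs(name_starts_with="<prefix>/"))

# Workaround: only call delete_blobs when at least one blob matched the
# prefix, otherwise the affected versions raise "tuple index out of range".
if blob_paged:
    container.delete_blobs(*blob_paged)
```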

Thanks

@ghost ghost added needs-triage Workflow: This is a new issue that needs to be triaged to the appropriate team. customer-reported Issues that are reported by GitHub users external to the Azure organization. question The issue doesn't require a change to the product in order to be resolved. Most issues start as that labels Jul 29, 2020
@kaerm kaerm added Client This issue points to a problem in the data-plane of the library. needs-team-attention Workflow: This issue needs attention from Azure service team or SDK team Service Attention Workflow: This issue is responsible by Azure service team. Storage Storage Service (Queues, Blobs, Files) labels Jul 29, 2020
@ghost ghost removed the needs-triage Workflow: This is a new issue that needs to be triaged to the appropriate team. label Jul 29, 2020
@ghost

ghost commented Jul 29, 2020

Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @xgithubtriage.

@xiafu-msft
Contributor

Hi @dr-dolittle

Thanks for reporting this issue! The PR above should resolve the problem!

@xiafu-msft
Contributor

I guess we can close this one since it's describing the same thing as #12778, right?

@xiafu-msft
Contributor

The fix has been released, so we will close this issue! Let me know if you want to reopen it or if you have other questions!

@xiafu-msft
Contributor

The fix has been released, so we will close this issue! Let me know if you want to reopen it or if you have other questions!

@xiafu-msft xiafu-msft reopened this Aug 14, 2020
openapi-sdkautomation bot pushed a commit to AzureSDKAutomation/azure-sdk-for-python that referenced this issue Feb 9, 2021
@github-actions github-actions bot locked and limited conversation to collaborators Apr 12, 2023
3 participants