
[azservicebus] Receiver indefinitely "stuck" after a long idle period #18517

Closed

tkent opened this issue Jul 1, 2022 · 27 comments
Assignees
Labels
  • bug: This issue requires a change to an existing behavior in the product in order to be resolved.
  • Client: This issue points to a problem in the data-plane of the library.
  • customer-reported: Issues that are reported by GitHub users external to the Azure organization.
  • needs-author-feedback: Workflow: More information is needed from author to address the issue.
  • no-recent-activity: There has been no recent activity on this issue.
  • Service Bus
Milestone

Comments

@tkent

tkent commented Jul 1, 2022

Bug Report

After a long period of inactivity, a receiver will stop receiving new messages. A "long period" is somewhere between 8 hours and 13 days, but exactly how long is unknown.

(The problem was originally brought up in this comment)

This is very straightforward to demonstrate if you are willing to wait and set up a dedicated Service Bus instance. It occurs frequently with infrastructure used for QA, since that often receives no activity over weekends and holidays.

SDK Versions Used

I have seen this behavior across many versions of the azure-sdk-for-go, but the most recent test was conducted using these versions:

github.com/Azure/azure-sdk-for-go/sdk/azcore v1.1.0
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.1.0
github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus v1.0.1

About the most recent time this was reproduced

We most recently reproduced this by running a small Go app in an AKS cluster using a managed identity assigned by aad-pod-identity.

In this test, we set up a dedicated Azure Service Bus namespace + managed identity (terraform below) and let the app run. After 13 days, we came back to it. No errors had been emitted, just the regular startup message for the app. I then put a message onto the bus. The receiver in the app did not pick up the message after 30 minutes of waiting. We deleted the pod running the app and allowed the deployment to recreate it. The replacement pod immediately picked up the message.

Workaround

We can work around this issue by polling for messages with a 10-minute timeout and restarting the receive call in a loop. Our workaround looks like this and has run for weeks without an issue.

rcvrCtxForSdkWorkaround, canceller := context.WithTimeout(ctx, 10*time.Minute)
messages, err := azsbReceiver.ReceiveMessages(rcvrCtxForSdkWorkaround, 1, nil)
canceller()
if err != nil && !errors.Is(err, context.DeadlineExceeded) {
	r.logger.Info(EvtNameErrRetrievingMsgs, map[string]string{
		"error": err.Error(),
	})
	continue
}
// Zero messages just means a context was cancelled before any messages
// were picked up. That could have been either the parent ctx or
// rcvrCtxForSdkWorkaround, so a loop is required.
if len(messages) == 0 {
	continue
}
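
For completeness, here is a minimal sketch of how that fragment sits inside a full receive loop. The function name pollLoop, the queueName parameter, and the handle callback are placeholders for illustration, not part of the original report.

package main

import (
	"context"
	"errors"
	"log"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus"
)

// pollLoop runs the 10-minute-timeout workaround in a loop. The receiver is
// created once; each ReceiveMessages call is bounded so a silently dead link
// cannot stall the loop forever.
func pollLoop(ctx context.Context, client *azservicebus.Client, queueName string, handle func(*azservicebus.ReceivedMessage)) error {
	receiver, err := client.NewReceiverForQueue(queueName, nil)
	if err != nil {
		return err
	}
	defer receiver.Close(context.Background())

	for {
		rcvCtx, cancel := context.WithTimeout(ctx, 10*time.Minute)
		messages, err := receiver.ReceiveMessages(rcvCtx, 1, nil)
		cancel()

		if err != nil && !errors.Is(err, context.DeadlineExceeded) {
			log.Printf("error retrieving messages: %v", err)
			continue
		}
		// A timeout with zero messages just means nothing arrived in this
		// window; if the parent context was cancelled, shut down cleanly.
		if ctx.Err() != nil {
			return ctx.Err()
		}
		for _, msg := range messages {
			handle(msg)
		}
	}
}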

The terraform for the test bus

The terraform below was used to setup the test bus and assign the app identity access to it.

resource "azurerm_servicebus_namespace" "ns" {
  name                = "fixturens${local.deploy_token}"
  location            = local.location
  resource_group_name = data.azurerm_resource_group.rg.name
  sku                 = "Standard"
  tags                = local.standard_tags
}

resource "azurerm_servicebus_queue" "test" {
  name                                 = "test"
  namespace_id                         = azurerm_servicebus_namespace.ns.id
  dead_lettering_on_message_expiration = false
  enable_partitioning                  = false
  default_message_ttl                  = "PT48H"
}

resource "azurerm_role_assignment" "full_access_ra" {
  for_each     = local.authorized_authorized_principal_ids_as_map
  scope        = azurerm_servicebus_queue.test.id
  principal_id = each.value
  role_definition_name = "Azure Service Bus Data Owner"
}
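
For context, the Go app in this test is wired up roughly like this. This is a sketch rather than the original code; the namespace hostname is a placeholder, and only the queue name "test" is taken from the terraform above.

package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus"
)

func main() {
	// aad-pod-identity exposes the managed identity to the pod, so the
	// default credential chain finds it without extra configuration.
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder hostname; the real one comes from the namespace created
	// by the terraform above.
	client, err := azservicebus.NewClient("fixturens-example.servicebus.windows.net", cred, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close(context.Background())

	// "test" matches the queue name in the terraform above.
	receiver, err := client.NewReceiverForQueue("test", nil)
	if err != nil {
		log.Fatal(err)
	}

	// Without the workaround, this single long-lived call is where the app
	// sits idle and can end up "stuck" after a long quiet period.
	messages, err := receiver.ReceiveMessages(context.Background(), 1, nil)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("received %d message(s)", len(messages))
}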
@ghost ghost added needs-triage Workflow: This is a new issue that needs to be triaged to the appropriate team. customer-reported Issues that are reported by GitHub users external to the Azure organization. question The issue doesn't require a change to the product in order to be resolved. Most issues start as that labels Jul 1, 2022
@richardpark-msft richardpark-msft self-assigned this Jul 1, 2022
@richardpark-msft
Member

Hi @tkent, thank you for filing this issue. I know it's frustrating to deal with a bug, so I appreciate you working with me on this.

We have tests for these kinds of scenarios but clearly, since you're seeing a bug, I'm missing something. I'll see what I'm missing there.

@tkent
Author

tkent commented Jul 1, 2022

@richardpark-msft - Hey, I appreciate you looking into it. Frustrating, yes, but it would be much more frustrating if we didn't have a workaround or if we'd filed an issue that got dismissed/ignored.

Priority-wise, since we have a workaround it's not high on our list. That said, I'd imagine others won't want to go through the same learning process on this one.

@jhendrixMSFT jhendrixMSFT added Service Bus Client This issue points to a problem in the data-plane of the library. and removed question The issue doesn't require a change to the product in order to be resolved. Most issues start as that needs-triage Workflow: This is a new issue that needs to be triaged to the appropriate team. labels Jul 5, 2022
@ghost ghost added the needs-team-attention Workflow: This issue needs attention from Azure service team or SDK team label Jul 5, 2022
@RickWinter RickWinter added the bug This issue requires a change to an existing behavior in the product in order to be resolved. label Jul 5, 2022
@RickWinter RickWinter added this to the 2022-10 milestone Jul 5, 2022
richardpark-msft added a commit that referenced this issue Nov 8, 2022
…alls (#19506)

I added in a simple idle timer in #19465, which would expire the link if our internal message receive went longer than 5 minutes. This extends that to track it across multiple consecutive calls as well, in case the user calls and cancels multiple times in a row, eating up 5 minutes of wall-clock time.

This is actually pretty similar to the workaround applied by the customer here in #18517 but tries to take into account multiple calls and also recovers the link without exiting ReceiveMessages().
@richardpark-msft
Member

Hey @tkent, I added a client-side idle timer that does something similar to what you outlined above. Under the covers, it recycles the link if nothing is received for 5 minutes. It was released in azservicebus 1.1.2.

Closing this now as we've formally implemented something similar to your workaround :).

This should help combat a situation I've been worried about for a bit: if the server idles out our link, or detaches it and we miss it, the link will still look alive during these quiet times even though it will never work again. We now close the link and attempt to recreate it, which forces a reconciliation between the service and the client.

@richardpark-msft
Member

Reopening as there's still work for this.

@RickWinter RickWinter modified the milestones: 2022-10, 2023-04 Jan 10, 2023
@rokf

rokf commented Jan 24, 2023

Hello @tkent 🙂

Are you still using the same workaround?

Have you perhaps tried the same solution but with a shorter timeout duration? If yes, have you noticed any differences?

@richardpark-msft
Member

@rokf , are you seeing this issue as well?

@tkent
Author

tkent commented Jan 24, 2023

👋 @rokf - I am still using this workaround. It has been stable for the past several months, so we haven't touched it. I picked 10 minutes arbitrarily and we've not tried a lower value (though I'm sure that would work).

However... @richardpark-msft it's important to note that just last Wednesday (01/18), we might have experienced the issue in our prod environment. We had a situation where we again stopped receiving messages and no errors were emitted from the SDK. This time, unlike the others, our Azure Service Bus UserError metric started climbing, indicating some interaction with the bus. Restarting the app caused messages to start being picked up again.

We're adding more debugging to try to figure out what on earth happened, but that may have been this same problem showing up after 6+ months with the workaround in place. The lack of visibility into the problem makes it very hard to tell.

@rokf

rokf commented Jan 24, 2023

@tkent Thank you 👍

@richardpark-msft yes, it seems so. We've had similar problems in the past with the previous SDK (messages not being read). Things had been looking good for a while with the new SDK (this one) since we migrated. Last week something started happening (see the graphs below).

The version that we're using is:

github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus v1.1.3

Restarting indeed works - the messages get picked up immediately.

[graph: active_connections]

[graph: server_errors]

We're planning to set up alerts on the production Service Bus namespaces to catch stuck messages, and we'll try to add additional logging to the client for the time being.

We're also looking into the possibility of introducing some kind of periodic queue-connection health check.

@tkent
Author

tkent commented Jan 30, 2023

@rokf - We are not; we keep the same receiver for the lifetime of the application.

It may be helpful to know that the service using this workaround doesn't process many messages yet. In our highest-volume environment, a given instance receives fewer than 500 messages a day, and our peak rate is around 2-3 messages per second. We wouldn't yet have seen any intermittent issues that only come up under even moderate load.

@alesbrelih

We have also been experiencing the same problems since 2023-01-14. This can be seen in the graph below, which shows active connections. The spikes back up indicate where we restarted containers.

I should also mention that our traffic isn't really high and most of the time the receivers are idle.

[screenshot: active connections graph, 2023-02-01 13:21]

@richardpark-msft Do you perhaps have some more info on this?

@richardpark-msft
Member

@alesbrelih, can you generate a log of your failures? I'm working on a few fixes - the first one, which should come out in the next release, will introduce a timeout when we close links, which can otherwise hang indefinitely in some situations.

https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/messaging/azservicebus/README.md#troubleshooting
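
For reference, enabling those internal SDK logs looks roughly like this. This is a sketch based on the linked troubleshooting/logging doc; how you route the output is up to you.

package main

import (
	"fmt"

	azlog "github.com/Azure/azure-sdk-for-go/sdk/azcore/log"
	"github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus"
)

func main() {
	// Print every SDK log line; in a real app, route this to your logger
	// and set it up before creating the azservicebus.Client.
	azlog.SetListener(func(event azlog.Event, s string) {
		fmt.Printf("[%s] %s\n", event, s)
	})

	// Subscribe only to the azservicebus event classes, which correspond to
	// the [azsb.Conn], [azsb.Auth], [azsb.Receiver] and [azsb.Sender]
	// prefixes seen in the captures below.
	azlog.SetEvents(
		azservicebus.EventConn,
		azservicebus.EventAuth,
		azservicebus.EventReceiver,
		azservicebus.EventSender,
	)

	// ... create the client/receiver and run as usual ...
}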

@alesbrelih

alesbrelih commented Feb 1, 2023

@richardpark-msft

I've checked logs and here are the results.

Stuck consumer:

[azsb.Receiver] Received 0/10 messages
[azsb.Conn] No close needed for cancellation
[azsb.Receiver] Received 0/10 messages
[azsb.Conn] No close needed for cancellation
[azsb.Receiver] [Rx9Ea3fu0qH3bAwVzdWlwRSw462mfoUI60nRTVwpZ6Vrn4-ArS-3qw] Message releaser starting...
[azsb.Receiver] [Rx9Ea3fu0qH3bAwVzdWlwRSw462mfoUI60nRTVwpZ6Vrn4-ArS-3qw] Message releaser pausing. Released 0 messages
[azsb.Receiver] Received 0/10 messages
[azsb.Receiver] [_-ywyRZ5M9boovToMfH9g27-frlI2dzeBlDIWwk0FrYUDpdune7AeA] Message releaser starting...
[azsb.Receiver] Asking for 10 credits
[azsb.Receiver] [_-ywyRZ5M9boovToMfH9g27-frlI2dzeBlDIWwk0FrYUDpdune7AeA] Message releaser pausing. Released 0 messages
[azsb.Receiver] No additional credits needed, still have 10 credits active
[azsb.Receiver] Received 0/10 messages
[azsb.Receiver] Asking for 10 credits
[azsb.Receiver] No additional credits needed, still have 10 credits active
[azsb.Conn] No close needed for cancellation
[azsb.Conn] No close needed for cancellation
[azsb.Receiver] [VJvcP9DFGERBZZBaxEMs_9z-fQ4A4XwF7oyhc5KThd1z6niYMPsALA] Message releaser starting...
[azsb.Receiver] [VJvcP9DFGERBZZBaxEMs_9z-fQ4A4XwF7oyhc5KThd1z6niYMPsALA] Message releaser pausing. Released 0 messages
[azsb.Receiver] Asking for 10 credits
[azsb.Receiver] [nZybya7rmta1Tc7-S2a8JyRHknmi9PjJpwKqjhGqfeKmQd60HJvYsg] Message releaser starting...
[azsb.Receiver] No additional credits needed, still have 10 credits active
[azsb.Receiver] [nZybya7rmta1Tc7-S2a8JyRHknmi9PjJpwKqjhGqfeKmQd60HJvYsg] Message releaser pausing. Released 0 messages
[azsb.Receiver] Asking for 10 credits
[azsb.Receiver] No additional credits needed, still have 10 credits active
[azsb.Receiver] Received 0/10 messages
[azsb.Conn] No close needed for cancellation
[azsb.Receiver] Received 0/10 messages
[azsb.Receiver] Received 0/10 messages
[azsb.Conn] No close needed for cancellation
[azsb.Receiver] Received 0/10 messages
[azsb.Conn] No close needed for cancellation
[azsb.Receiver] [_-ywyRZ5M9boovToMfH9g27-frlI2dzeBlDIWwk0FrYUDpdune7AeA] Message releaser starting...
[azsb.Receiver] [_-ywyRZ5M9boovToMfH9g27-frlI2dzeBlDIWwk0FrYUDpdune7AeA] Message releaser pausing. Released 0 messages
[azsb.Receiver] Asking for 10 credits
[azsb.Receiver] No additional credits needed, still have 10 credits active
[azsb.Receiver] [VJvcP9DFGERBZZBaxEMs_9z-fQ4A4XwF7oyhc5KThd1z6niYMPsALA] Message releaser starting...
[azsb.Receiver] [nZybya7rmta1Tc7-S2a8JyRHknmi9PjJpwKqjhGqfeKmQd60HJvYsg] Message releaser starting...
[azsb.Conn] No close needed for cancellation
[azsb.Receiver] [Rx9Ea3fu0qH3bAwVzdWlwRSw462mfoUI60nRTVwpZ6Vrn4-ArS-3qw] Message releaser starting...
[azsb.Receiver] [Rx9Ea3fu0qH3bAwVzdWlwRSw462mfoUI60nRTVwpZ6Vrn4-ArS-3qw] Message releaser pausing. Released 0 messages
[azsb.Receiver] [nZybya7rmta1Tc7-S2a8JyRHknmi9PjJpwKqjhGqfeKmQd60HJvYsg] Message releaser pausing. Released 0 messages
[azsb.Receiver] Asking for 10 credits
[azsb.Receiver] No additional credits needed, still have 10 credits active
[azsb.Receiver] [VJvcP9DFGERBZZBaxEMs_9z-fQ4A4XwF7oyhc5KThd1z6niYMPsALA] Message releaser pausing. Released 0 messages
[azsb.Receiver] Asking for 10 credits
[azsb.Receiver] Asking for 10 credits
[azsb.Receiver] No additional credits needed, still have 10 credits active
[azsb.Receiver] No additional credits needed, still have 10 credits active

If I compare the logs of the broken consumer with a working one, the refreshing/negotiate claims [Auth] logs are missing 🤔 The last Auth refresh happened 2 days ago, and since then there is no mention of it, even though it says the token will expire in 15 minutes.

It is also accompanied by context deadline exceeded errors that were not present before this:

2023-01-30T21:21:24.773171761Z [azsb.Auth] (mock-queue/$DeadLetterQueue/$management) refreshing claim
2023-01-30T21:21:24.773198251Z [azsb.Auth] (mock-queue/$DeadLetterQueue) refreshing claim
2023-01-30T21:21:24.773201858Z [azsb.Auth] (mock-queue/$DeadLetterQueue) negotiate claim, token expires on 2023-01-30T21:36:24Z
2023-01-30T21:21:24.773203671Z [azsb.Auth] (mock-queue/$DeadLetterQueue/$management) negotiate claim, token expires on 2023-01-30T21:36:24Z
2023-01-30T21:22:11.244252046Z [azsb.Auth] (mock-queue) negotiate claim, failed: context deadline exceeded
2023-01-30T21:22:11.244282373Z [azsb.Receiver] (receiveMessages.getlinks) Retry attempt 0 was cancelled, stopping: context deadline exceeded
2023-01-30T21:22:11.244288625Z [azsb.Auth] (mock-queue) refreshing claim
2023-01-30T21:22:11.244313101Z [azsb.Auth] (mock-queue) negotiate claim, token expires on 2023-01-30T21:37:11Z
2023-01-30T21:22:11.244317740Z [azsb.Auth] (mock-queue/$DeadLetterQueue) negotiate claim, failed: context deadline exceeded
2023-01-30T21:22:11.244321717Z [azsb.Receiver] (receiveMessages.getlinks) Retry attempt 0 was cancelled, stopping: context deadline exceeded
2023-01-30T21:22:11.244326336Z [azsb.Auth] (mock-queue) negotiate claim, failed: context deadline exceeded
2023-01-30T21:22:11.244329682Z [azsb.Auth] (mock-queue/$DeadLetterQueue) refreshing claim
2023-01-30T21:22:11.244332768Z [azsb.Auth] (mock-queue/$DeadLetterQueue) negotiate claim, token expires on 2023-01-30T21:37:11Z
2023-01-30T21:22:11.244344831Z [azsb.Auth] (mock-queue/$DeadLetterQueue) negotiate claim, failed: context deadline exceeded
2023-01-30T21:22:11.244346664Z [azsb.Receiver] (receiveMessages.getlinks) Retry attempt 0 was cancelled, stopping: context deadline exceeded
2023-01-30T21:22:11.244349379Z [azsb.Auth] (mock-queue/$DeadLetterQueue) refreshing claim
2023-01-30T21:22:11.244362454Z [azsb.Auth] (mock-queue/$DeadLetterQueue) negotiate claim, token expires on 2023-01-30T21:37:11Z
2023-01-30T21:22:11.244365319Z [azsb.Receiver] (receiveMessages.getlinks) Retry attempt 0 was cancelled, stopping: context deadline exceeded
2023-01-30T21:22:11.244367303Z [azsb.Auth] (mock-queue) refreshing claim
2023-01-30T21:22:11.244369016Z [azsb.Auth] (mock-queue) negotiate claim, token expires on 2023-01-30T21:37:11Z
2023-01-30T21:22:24.233462503Z [azsb.Receiver] Received 0/10 messages
2023-01-30T21:22:24.233494573Z [azsb.Conn] No close needed for cancellation
2023-01-30T21:22:24.233499503Z [azsb.Receiver] Received 0/10 messages
2023-01-30T21:22:24.233501857Z [azsb.Conn] No close needed for cancellation
2023-01-30T21:22:24.233503480Z [azsb.Receiver] Received 0/10 messages

If you need more data or something specific please let me know.

@rokf

rokf commented Feb 9, 2023

@richardpark-msft Hey Richard, we've noticed that a new version has been released - https://github.com/Azure/azure-sdk-for-go/tree/sdk/messaging/azservicebus/v1.2.0. Do you suggest that we switch to that version?

@richardpark-msft
Member

richardpark-msft commented Feb 9, 2023

@richardpark-msft Hey Richard, we've noticed that a new version has been released - https://github.com/Azure/azure-sdk-for-go/tree/sdk/messaging/azservicebus/v1.2.0. Do you suggest that we switch to that version?

I always do, but I'm a bit biased. :) I'm still working on adding some resiliency to see if we can help more with part of this situation. From what I can tell, we're getting into a state where the link hasn't re-issued credits (which tell Service Bus to send us more messages). I have a fix in the works for that, but I'm still going over the ramifications of it.

The bug fix I added in this release makes our internal Close() function time out - prior to this it could hang for a long time, and if it was cancelled it could leave things in an inconsistent state. This can affect things even if you aren't explicitly calling Close(), since we close things internally during network recovery.

So yes, definitely recommended. There are some additional logging messages now that show whether this code path is being triggered and is remediating the issue (note, these are always subject to change in the future):

Connection reset for recovery instead of link. Link closing has timed out.
Connection closed instead. Link closing has timed out.

If your client was stuck or seemed to disappear during the recovery process, this could be the root cause.

@alesbrelih

We are still having issues with this.

Is this something that is still being worked on?

@richardpark-msft
Member

There have been several fixes in this area; can you give me more details, @alesbrelih?

Specifically, there have also been some changes on the service side, and we have been working on the go-amqp stack as well to improve reliability.

I'd need internal logs from your situation as well. See here on how to enable them:
https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/messaging/azservicebus/README.md#logging

@richardpark-msft
Member

@alesbrelih, sorry I missed that you've provided details above. I'm following up on a fix that was made on the service-side that matches what you were seeing, in that you have valid credits on the link but no messages are being returned. I'll post back here when I have details.

@tkent
Author

tkent commented Mar 22, 2023

@richardpark-msft

I'm following up on a fix that was made on the service-side that matches what you were seeing, in that you have valid credits on the link but no messages are being returned.

That's a very useful insight. I think that explains what we saw a few times and means I can stop chasing it.

@rokf

rokf commented Mar 24, 2023

Can this issue affect Senders as well?

We have some code that opens a client, sends a message, and then closes the client. This isn't performant, and it can take a couple of seconds before the message is sent to ASB. We'd like to reduce the time it takes to send a message by using long-lived clients (senders), just as for the receivers, and I was wondering whether we could do more harm than good in the current situation?

@richardpark-msft
Member

@rokf, the approach you have is basically eating the cost of starting up a TCP connection each time. You can improve this a few ways:

  • Use a single azservicebus.Client instance. The Client owns the expensive part (the TCP connection) so keeping that around and using it when creating your Senders will make things faster.
  • If you want you can also go further and use a single Sender instance as well - this'll have the same effect but also cache the link, which does have some startup cost as well.

Now, in either case, the connection may have to restart if you idle out, so there might still be some initialization cost depending on network conditions, etc. But the tactics above are the easiest ways to avoid eating the startup cost on every send.
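
A minimal sketch of that advice: create the Client once, keep a Sender around, and reuse both across sends. The namespace hostname and queue name here are placeholders.

package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/messaging/azservicebus"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}

	// One Client per process: it owns the expensive TCP connection.
	client, err := azservicebus.NewClient("example.servicebus.windows.net", cred, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close(context.Background())

	// One long-lived Sender: caches the AMQP link as well.
	sender, err := client.NewSender("my-queue", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer sender.Close(context.Background())

	// Reuse the same sender for every send instead of rebuilding the
	// client/sender pair each time.
	for i := 0; i < 3; i++ {
		err := sender.SendMessage(context.Background(), &azservicebus.Message{
			Body: []byte("hello"),
		}, nil)
		if err != nil {
			log.Printf("send failed: %v", err)
		}
	}
}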

@richardpark-msft
Member

Hi all, just want to update with some info.

There have been a lot of fixes in the latest package release, azservicebus@v1.3.0, to improve reliability and resilience.

In this release, we incorporated the GA version of go-amqp (the underlying protocol stack for this package). A lot of the work that went into it was around deadlocks and race conditions within the library that could cause it to become unresponsive. In addition, there have been service-side fixes related to messages not being delivered despite being available.

Every release brings improvements but this one hits at the core of the stack and should yield improvements in overall reliability, especially in some potential corner cases with connection/link recovery.

I'd encourage all the people involved in this thread to upgrade.

@richardpark-msft richardpark-msft added customer-response-expected and removed needs-team-attention Workflow: This issue needs attention from Azure service team or SDK team labels May 15, 2023
@rokf

rokf commented May 16, 2023

Thank you Richard, we'll update ASAP 👍

@tkent
Author

tkent commented May 25, 2023

@richardpark-msft. I will roll out the new library version in our lower envs and let it run for a bit. However, I'm not going to roll it out anywhere in our "live paths". While validation of this type of thing isn't nearly as useful without real traffic, the workaround I have in place is stable and I don't want to introduce risk there.

In the original description, I included some terraform and a description of how to reproduce the issue. Has this library been tested against that use case or some comparable one?

@richardpark-msft
Member

@richardpark-msft. I will roll out the new library version in our lower envs and let it run for a bit. However, I'm not going to roll it out anywhere in our "live paths". While validation of this type of thing isn't nearly as useful without real traffic, the workaround I have in place is stable and I don't want to introduce risk there.

I appreciate all the testing you've done, and your workaround is harmless; it should work just fine either way. It would be interesting to know whether, in production code, you still see your workaround trigger - at that point we'd want to involve the service team to see if there's something interesting happening there, instead of just focusing on the client SDK.

In the original description, I included some terraform and a description of how to reproduce the issue. Has this library been tested against that use case or some comparable one?

We run three different kinds of tests: unit, live/integration and long-term.

We've added a lot of tests in all three areas based on feedback from you (and others) to see if we could get to the bottom of this. We definitely found and fixed bugs, but I was never able to reproduce it as readily as your scenario does, despite trying a lot of variations.

However, I do have your scenario covered here: https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/messaging/azservicebus/internal/stress/tests/infinite_send_and_receive.go. We generally run these tests for a week to give them space to fail, and we have a few other tricks using chaos-mesh to try to induce failures earlier.

(another one inspired by some bugs: https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/messaging/azservicebus/internal/stress/tests/mostly_idle_receiver.go)

@richardpark-msft richardpark-msft added needs-author-feedback Workflow: More information is needed from author to address the issue. and removed customer-response-expected labels Jun 28, 2023
@github-actions

Hi @tkent. Thank you for opening this issue and giving us the opportunity to assist. To help our team better understand your issue and the details of your scenario please provide a response to the question asked above or the information requested above. This will help us more accurately address your issue.

@github-actions

github-actions bot commented Jul 5, 2023

Hi @tkent, we're sending this friendly reminder because we haven't heard back from you in 7 days. We need more information about this issue to help address it. Please be sure to give us your input. If we don't hear back from you within 14 days of this comment the issue will be automatically closed. Thank you!

@github-actions github-actions bot added the no-recent-activity There has been no recent activity on this issue. label Jul 5, 2023
@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Jul 20, 2023
@github-project-automation github-project-automation bot moved this from In Progress to Done in Azure SDK for Service Bus Jul 20, 2023
@github-actions github-actions bot locked and limited conversation to collaborators Oct 18, 2023
Projects
Status: Done
Development

No branches or pull requests

6 participants