
Frequent request timeouts (408) #1610

Closed
skurik opened this issue Jun 10, 2020 · 24 comments

@skurik

skurik commented Jun 10, 2020

We recently started using Azure Cosmos DB and it became obvious we don't fully understand how to deal with some of the issues it brings.

In particular, we are observing a large number of request timeouts.

The exceptions look like this:

[
    {
        "Details": null,
        "InnerExceptions": [
            {
                "Details": null,
                "InnerExceptions": [],
                "Message": "A client transport error occurred: The request timed out while waiting for a server response. (Time: 2020-06-09T12:43:32.8643249Z, activity ID: ebaa68c7-b8fc-46a8-8fb2-9d345a4b94d2, error code: ReceiveTimeout [0x0010], base error: HRESULT 0x80131500, URI: rntbd://cdb-ms-prod-westeurope1-fd12.documents.azure.com:14023/apps/b354ae5f-004d-4332-9e8b-699797d3441b/services/c6c0736e-5b33-4ec7-9917-25318f7713b8/partitions/1d54230d-f870-44cd-affb-83e77d5fc9ba/replicas/132357606089777958p/, connection: 10.0.4.110:56578 -> 13.69.112.4:14023, payload sent: True, CPU history: (2020-06-09T12:42:32.0825005Z 22.305), (2020-06-09T12:42:42.0804253Z 18.633), (2020-06-09T12:42:52.0768108Z 21.445), (2020-06-09T12:43:02.4892878Z 40.478), (2020-06-09T12:43:29.3331157Z 97.142), (2020-06-09T12:43:32.8487003Z 98.556), CPU count: 4)",
                "StackTrace": [
                    "   at Microsoft.Azure.Documents.Rntbd.Channel.<RequestAsync>d__13.MoveNext()",
                    "--- End of stack trace from previous location where exception was thrown ---",
                    "   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()",
                    "   at Microsoft.Azure.Documents.Rntbd.LoadBalancingPartition.<RequestAsync>d__9.MoveNext()",
                    "--- End of stack trace from previous location where exception was thrown ---",
                    "   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()",
                    "   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)",
                    "   at Microsoft.Azure.Documents.Rntbd.TransportClient.<InvokeStoreAsync>d__10.MoveNext()"
                ],
                "Type": "Microsoft.Azure.Documents.TransportException"
            }
        ],
        "Message": "Response status code does not indicate success: RequestTimeout (408); Substatus: 0; ActivityId: ebaa68c7-b8fc-46a8-8fb2-9d345a4b94d2; Reason: (Message: Request timed out.\r\nActivityId: ebaa68c7-b8fc-46a8-8fb2-9d345a4b94d2, Request URI: /apps/b354ae5f-004d-4332-9e8b-699797d3441b/services/c6c0736e-5b33-4ec7-9917-25318f7713b8/partitions/1d54230d-f870-44cd-affb-83e77d5fc9ba/replicas/132357606089777958p/, RequestStats: Please see CosmosDiagnostics, SDK: Windows/10.0.14393 cosmos-netstandard-sdk/3.9.0);",
        "StackTrace": [
            "   at Microsoft.Azure.Documents.Rntbd.TransportClient.<InvokeStoreAsync>d__10.MoveNext()",
            "--- End of stack trace from previous location where exception was thrown ---",
            "   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()",
            "   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)",
            "   at Microsoft.Azure.Documents.ConsistencyWriter.<WritePrivateAsync>d__18.MoveNext()",
            "--- End of stack trace from previous location where exception was thrown ---",
            "   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()",
            "   at Microsoft.Azure.Documents.StoreResult.VerifyCanContinueOnException(DocumentClientException ex)",
            "   at Microsoft.Azure.Documents.StoreResult.CreateStoreResult(StoreResponse storeResponse, Exception responseException, Boolean requiresValidLsn, Boolean useLocalLSNBasedHeaders, Uri storePhysicalAddress)",
            "   at Microsoft.Azure.Documents.ConsistencyWriter.<WritePrivateAsync>d__18.MoveNext()",
            "--- End of stack trace from previous location where exception was thrown ---",
            "   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()",
            "   at Microsoft.Azure.Documents.BackoffRetryUtility`1.<ExecuteRetryAsync>d__5.MoveNext()",
            "--- End of stack trace from previous location where exception was thrown ---",
            "   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()",
            "   at Microsoft.Azure.Documents.ShouldRetryResult.ThrowIfDoneTrying(ExceptionDispatchInfo capturedException)",
            "   at Microsoft.Azure.Documents.BackoffRetryUtility`1.<ExecuteRetryAsync>d__5.MoveNext()",
            "--- End of stack trace from previous location where exception was thrown ---",
            "   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()",
            "   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)",
            "   at Microsoft.Azure.Documents.ConsistencyWriter.<WriteAsync>d__17.MoveNext()",
            "--- End of stack trace from previous location where exception was thrown ---",
            "   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()",
            "   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)",
            "   at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task task)",
            "   at Microsoft.Azure.Documents.ReplicatedResourceClient.<>c__DisplayClass27_0.<<InvokeAsync>b__0>d.MoveNext()",
            "--- End of stack trace from previous location where exception was thrown ---",
            "   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()",
            "   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)",
            "   at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task task)",
            "   at Microsoft.Azure.Documents.RequestRetryUtility.<ProcessRequestAsync>d__2`2.MoveNext()",
            "--- End of stack trace from previous location where exception was thrown ---",
            "   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()",
            "   at Microsoft.Azure.Documents.ShouldRetryResult.ThrowIfDoneTrying(ExceptionDispatchInfo capturedException)",
            "   at Microsoft.Azure.Documents.RequestRetryUtility.<ProcessRequestAsync>d__2`2.MoveNext()",
            "--- End of stack trace from previous location where exception was thrown ---",
            "   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()",
            "   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)",
            "   at Microsoft.Azure.Documents.StoreClient.<ProcessMessageAsync>d__19.MoveNext()",
            "--- End of stack trace from previous location where exception was thrown ---",
            "   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()",
            "   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)",
            "   at Microsoft.Azure.Documents.ServerStoreModel.<ProcessMessageAsync>d__15.MoveNext()",
            "--- End of stack trace from previous location where exception was thrown ---",
            "   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()",
            "   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)",
            "   at Microsoft.Azure.Cosmos.Handlers.TransportHandler.<ProcessMessageAsync>d__3.MoveNext()",
            "--- End of stack trace from previous location where exception was thrown ---",
            "   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()",
            "   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)",
            "   at Microsoft.Azure.Cosmos.Handlers.TransportHandler.<SendAsync>d__2.MoveNext()"
        ],
        "Type": "Microsoft.Azure.Cosmos.CosmosException"
    }
]

We are migrating lots of data from SQL Server to Cosmos and the access pattern is as follows:

while (!allMigrated)
{
    foreach (var batchNumber in Enumerable.Range(1, 50))
    {
        var items = FetchFromSQL(count: 100); // This takes about 2 seconds

        var writeTasks = items.Select(i =>
            container.CreateItemAsync(i, requestOptions: new ItemRequestOptions { EnableContentResponseOnWrite = false }));

        // Blocking wait; the surrounding code is not async.
        Task.WhenAll(writeTasks).ConfigureAwait(false).GetAwaiter().GetResult();
    }

    Wait(TimeSpan.FromMinutes(10));
}

There may be other threads writing to Cosmos at the same time, but these will typically write just a few items at a time.

We are using a single instance of CosmosClient throughout the application.

On Azure Portal, I can see we are not being throttled:

[Azure Portal metrics screenshot showing no throttling]

So my question is basically: why are the requests timing out so often when we don't even hit the provisioned RU limit? (We currently have 11,000 RU/s in autoscale mode.)

Are we using it wrong? Is there a recommended pattern for inserting a batch / large amount of data at once? AllowBulkExecution is not really useful here, as it waits up to 1 second for a batch to fill, and there will be situations where the batch simply won't fill up quickly enough (the migrator above runs only every 10 minutes).

Can request timeouts also be caused by rate throttling? (That would not make much sense, though, as the Azure Portal shows we are not being rate-throttled.)

I read through the request timeout troubleshooting guide and the only relevant points seem to be these:

  • Users sometimes see elevated latency or request timeouts because their collections are provisioned insufficiently, the back-end throttles requests, and the client retries internally. Check the portal metrics.
  • Azure Cosmos DB distributes the overall provisioned throughput evenly across physical partitions. Check portal metrics to see if the workload is encountering a hot partition key. This will cause the aggregate consumed throughput (RU/s) to appear to be under the provisioned RUs, while the consumed throughput (RU/s) of a single partition exceeds the provisioned throughput.

And these two points go back to my question: how do I determine precisely what is causing the timeouts? I could keep raising the provisioned RUs until the timeouts stop, but that hardly seems like a reasonable approach.

Thank you for any insight.

@skurik
Author

skurik commented Jun 10, 2020

One more idea: how reasonable would it be to

  • issue writes to Cosmos in a fire-and-forget manner (so that, from the caller's perspective, they never fail)
  • deal with errors/timeouts/retry logic asynchronously via continuations attached to the tasks returned by CreateItemAsync (rough sketch below)

?
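
For illustration, a rough sketch of the idea; the console logging stands in for whatever real error handling/requeueing we would plug in:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

static class FireAndForgetWriter
{
    // Fire-and-forget write: the caller is never blocked on (or failed by) the Cosmos call.
    static Task WriteFireAndForget<T>(Container container, T item)
    {
        Task<ItemResponse<T>> task = container.CreateItemAsync(
            item,
            requestOptions: new ItemRequestOptions { EnableContentResponseOnWrite = false });

        // Handle timeouts/errors asynchronously via a continuation instead of awaiting.
        task.ContinueWith(t =>
        {
            // Placeholder error handling: log and, in real code, requeue the item for a later retry.
            Exception error = t.Exception?.Flatten().InnerException;
            Console.Error.WriteLine($"Cosmos write failed, will need a retry: {error}");
        },
        TaskContinuationOptions.OnlyOnFaulted);

        return task;
    }
}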

@j82w
Contributor

j82w commented Jun 10, 2020

  1. A 408 will not be caused by throttling. It is likely caused by high CPU usage or a networking problem. Is the application running in the same region as the Cosmos DB instance, and what are the CPU and memory usage on the machine?
  2. Why are you doing a blocking GetAwaiter().GetResult() call instead of using async/await? This can lead to deadlocks.
  3. For this scenario you could significantly increase throughput by using Batch.
  4. Bulk should still work in this scenario. Is there any reason you can't fetch all 5000 items and then do the bulk operations, instead of the outer batching loop? That would avoid the 1-second delay except for maybe the last call. Why is the 1-second wait too long when you have 10 minutes? (See the bulk sketch after this list.)
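
A rough sketch of the bulk approach; MyItem, FetchAllFromSQL, the connection string and the database/container names are placeholders for your own types and configuration:

using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Created once per application, with bulk support turned on.
CosmosClient client = new CosmosClient("<connection string>", new CosmosClientOptions
{
    AllowBulkExecution = true
});
Container container = client.GetContainer("MyDatabase", "MyContainer"); // placeholder names

// Fetch the whole migration set up front (FetchAllFromSQL is a placeholder), then issue all
// the point writes concurrently; the SDK groups them into batches per partition internally.
MyItem[] items = FetchAllFromSQL(); // placeholder

var writeTasks = items.Select(i => container.CreateItemAsync(
    i,
    requestOptions: new ItemRequestOptions { EnableContentResponseOnWrite = false }));

await Task.WhenAll(writeTasks);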

@skurik
Author

skurik commented Jun 10, 2020

Thank you @j82w .

  1. The App Service hosting the Cosmos client and the Cosmos DB account are in the same Azure region. I didn't see an increase in CPU or memory utilization on the client. I will try to confirm this with a deeper investigation.
  2. It's not ideal, but we can't use async/await right now (legacy code). I understand this can deadlock, but fortunately that's not what we've been observing so far.
  3. That looks really good; I can't believe I didn't find this myself.
  4. If it were just this migration running, we wouldn't care about the potential 1-second wait. But as I mentioned, there are other writers in the system that write just individual items, and they would be delayed by this.

@j82w
Contributor

j82w commented Jun 10, 2020

@ealsur any suggestions?

  1. What are the CPU and memory usage on the machine?
  2. This PR might be helpful and will hopefully make the next release.
  3. Any chance you can upgrade to 3.9.1 and then include the diagnostics? (A sketch of capturing the diagnostics follows this list.)
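
For example, something along these lines captures the diagnostics for slow or failed calls; the item type, the 1-second threshold and the console logging are placeholders:

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

static class DiagnosticsLogging
{
    static async Task CreateWithDiagnosticsAsync<T>(Container container, T item)
    {
        var stopwatch = Stopwatch.StartNew();
        try
        {
            ItemResponse<T> response = await container.CreateItemAsync(item);
            stopwatch.Stop();

            // The diagnostics string is verbose, so only log it for unusually slow calls.
            if (stopwatch.Elapsed > TimeSpan.FromSeconds(1))
            {
                Console.WriteLine($"Slow Cosmos call: {response.Diagnostics}");
            }
        }
        catch (CosmosException ex)
        {
            // Failed requests carry the diagnostics on the exception itself.
            Console.WriteLine($"Cosmos call failed ({ex.StatusCode}): {ex.Diagnostics}");
        }
    }
}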

@skurik
Author

skurik commented Jun 10, 2020

  1. I have now checked: average CPU utilization is 40%, with peaks up to 60%. Memory usage is 35%. However, these errors really look like server-side timeouts, don't they?
  2. Nice, will monitor this.
  3. I will upgrade.

Could there be a problem with thread exhaustion?

@j82w
Contributor

j82w commented Jun 10, 2020

I looked at the server-side logs based on the info in the exception and I don't see any timeouts or other errors besides some 429s. It's possible you are hitting SNAT Port exhaustion.

@skurik
Author

skurik commented Jun 10, 2020

It's possible you are hitting SNAT Port exhaustion.

Can this be verified somehow? Will it be shown in the diagnostics property?

@skurik
Author

skurik commented Jun 10, 2020

Thank you @ealsur. Those articles refer to VMs, so I found this one, which talks specifically about App Services (our case).

All of the solutions mentioned there are about changing the code, e.g.

  • use connection pooling
  • use less aggressive retry logic
  • use keep-alives to reset the outbound idle timeout

However, can I somehow enforce these when using the Cosmos SDK (apart from maybe setting different retry timeouts)? We are already using a single instance of the client per application.
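
For reference, the closest SDK-level knobs I can find are on CosmosClientOptions. A rough sketch, assuming these properties behave as documented; the values are arbitrary examples, not recommendations, and the connection string is a placeholder:

using System;
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient("<connection string>", new CosmosClientOptions
{
    ConnectionMode = ConnectionMode.Direct,

    // Proactively close direct-mode TCP connections that sit idle, instead of
    // letting them hit the platform's outbound idle timeout.
    IdleTcpConnectionTimeout = TimeSpan.FromMinutes(20),

    // Fail connection attempts faster than the default.
    OpenTcpConnectionTimeout = TimeSpan.FromSeconds(5),

    // Port reuse can reduce SNAT port pressure.
    PortReuseMode = PortReuseMode.ReuseUnicastPort,

    // Less aggressive built-in retries on throttled (429) requests.
    MaxRetryAttemptsOnRateLimitedRequests = 3,
    MaxRetryWaitTimeOnRateLimitedRequests = TimeSpan.FromSeconds(10)
});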

@skurik
Author

skurik commented Jun 10, 2020

I looked at the server-side logs based on the info in the exception and I don't see any timeouts or other errors besides some 429s. It's possible you are hitting SNAT Port exhaustion.

@j82w So even when I get a 408 response, it doesn't necessarily mean that it actually came from the server? It might be the client telling me "I could not even open the connection due to SNAT port exhaustion"?

@ealsur
Member

ealsur commented Jun 10, 2020

@skurik For App Service this one is also good: https://azure.github.io/AppService/2018/03/01/Deep-Dive-into-TCP-Connections-in-App-Service-Diagnostics.html

@j82w j82w closed this as completed Aug 11, 2020
@dan-matthews

dan-matthews commented Sep 2, 2020

Did you ever get a resolution to this, @skurik? I am getting the same :(

@j82w
Contributor

j82w commented Sep 2, 2020

@dan-matthews please check out the request timeout troubleshooting guide.

@dan-matthews

Thanks for the feedback @j82w, I have been through that in detail already. I'm running on a Linux App Service with a good partition key (document id), I've tested in both Direct and Gateway modes, and I've played with idle timeouts and port reuse. My whole architecture is async, I use a singleton to hold my Cosmos client, and I'm on .NET Core 3.1 with the latest version of the Cosmos DB SDK (3.12.0). I've also used the troubleshooter for the TCP connections in my App Service and everything is stable at 50 to 60 connections, nothing failing. The CPU on the App Service is stable at about 5%, memory at about 40%, and the RUs of the Cosmos DB are peaking below 500 (it's autopilot up to 4,000). I've put logging on my Cosmos DB and it seems the requests don't even get to it, because there aren't any queries running there for more than a few milliseconds (or, if they are running, they return quickly and the response gets lost). Basically, the entire architecture is just ticking over, not breaking a sweat at all.

Yet, no matter what I try, I still get 408s and socket timeouts on random requests, normally at a rate of about 1 in 100. It also doesn't matter whether the App Service has just started or has been running for a few hours. The error always occurs on the MoveNext of a Cosmos method, whether it's a feed iterator, a stream iterator or just a CreateContainerIfNotExistsAsync call. Here is an example of one; this hung for 1.1 minutes and then crashed out with a CanceledException:

Response status code does not indicate success: RequestTimeout (408); Substatus: 0; ActivityId: fc033e9e-0cc8-45d8-8d7f-ffa258f9c7d4; Reason: (GatewayStoreClient Request Timeout. Start Time:09/02/2020 09:35:27; Total Duration:00:01:05.0137418; Http Client Timeout:00:01:05; Activity id: fc033e9e-0cc8-45d8-8d7f-ffa258f9c7d4; Inner Message: The operation was canceled.;, Request URI: /dbs/XXXX/colls/XXXX, RequestStats: , SDK: Linux/10 cosmos-netstandard-sdk/3.11.4); The operation was canceled.

Or another; this time it hung for 1.1 minutes and then crashed with a SocketException:

Response status code does not indicate success: RequestTimeout (408); Substatus: 0; ActivityId: 5806f9f6-6b9f-4aa3-957e-f6cb507123e8; Reason: (GatewayStoreClient Request Timeout. Start Time:09/02/2020 10:28:33; Total Duration:00:01:05.0043387; Http Client Timeout:00:01:05; Activity id: 5806f9f6-6b9f-4aa3-957e-f6cb507123e8; Inner Message: The operation was canceled.;, Request URI: /dbs/XXXX/colls/XXXX/docs, RequestStats: , SDK: Linux/10 cosmos-netstandard-sdk/3.11.4); The operation was canceled. Unable to read data from the transport connection: Operation canceled. Operation canceled

It basically seems like it makes the request and then loses the response, so it just hangs. If you have any other ideas I'd love to hear them, because I'm kinda running out of options :) I did read somewhere to change the await to a Wait() on a Task, so I tried that with no luck. I'm desperate, I'll try anything ;)

@j82w
Contributor

j82w commented Sep 2, 2020

Are you by any chance doing a CreateContainerIfNotExistsAsync for each item operation? Or are you doing a lot of control plane operations like CreateContainerIfNotExistsAsync?

I would recommend Contacting Azure Support.

@dan-matthews

No, I'm actually storing the container in a member variable in the singleton service, so I only ever resolve it once. The only requests going out are just simple queries. I guess I will have to contact Azure Support... appreciate the feedback though!

@shelbaz

shelbaz commented Sep 4, 2020

Any updates? We seem to be having a similar issue

@jonatle

jonatle commented Oct 9, 2020

All http requests support username and pw. No username="-", no password="-", password="*".

@manish-jain-1

manish-jain-1 commented Dec 16, 2020

FYI, we are running into a similar issue with Azure Functions. I will open a separate issue.

@SumiranAgg
Member

@j82w I often get RequestTimeout errors. Below is a sample exception message:

Response status code does not indicate success: RequestTimeout (408); Substatus: 0; ActivityId: 676b0cc7-e472-4b9a-985f-190cd66cae94; Reason: (GatewayStoreClient Request Timeout. Start Time UTC:07/19/2021 10:16:09; Total Duration:65011.0795 Ms; Request Timeout 65000 Ms; Http Client Timeout:65000 Ms; Activity id: 676b0cc7-e472-4b9a-985f-190cd66cae94;)

I have followed the Troubleshooting guide but couldn't fix the issue.

Background: we are making a ReadContainerAsync call as a health check for our service. We hit this call 30 times per minute from each pod. The failure is seen about once per minute on average. The frequency is not too high compared to the total number of requests made, but we would like to fix it if possible.

@j82w
Contributor

j82w commented Jul 26, 2021

@SumiranAgg do not use ReadContainerAsync as a health check. Reading the container is a metadata operation; metadata operations in Cosmos DB are limited and will eventually get throttled. The SDK itself also only calls it once, on initialization. I would recommend doing a data plane operation like ReadItemStream on a non-existing document. This will make sure you can actually connect and get a response from the container. (A rough sketch is below.)
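
Roughly like this; the probe id and partition key value are placeholders, and a 404 is the expected (healthy) outcome:

using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

static class CosmosHealthCheck
{
    // Health check via a data-plane point read on an id that is not expected to exist.
    // A 404 still proves the client can reach the container and get a response.
    static async Task<bool> IsCosmosHealthyAsync(Container container)
    {
        using ResponseMessage response = await container.ReadItemStreamAsync(
            id: "health-check-probe",                          // placeholder id, not expected to exist
            partitionKey: new PartitionKey("health-check-probe"));

        return response.StatusCode == HttpStatusCode.NotFound || response.IsSuccessStatusCode;
    }
}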

Regarding the RequestTimeout, make sure you are using the latest SDK, 3.20.1. If it's still an issue after these changes, it would be best to open a support ticket.

@sjrulandch

(Quoting @dan-matthews' comment above in full.)

Did anyone ever figure this out? I'm seeing the same thing.

I have a simple Azure API App that calls Cosmos DB to read a record.
I can run it locally and get the Cosmos record back just fine in a second or two.
However, if I run it as an Azure API App, I get 408 or "Resource Temporarily Unavailable". Since it has been happening consistently for over 24 hours, I don't think the issue is 'temporary' anymore.

In my case, the API is pretty simple. I can start the service (running in a Linux container on .NET 6) and it doesn't do anything until a web request comes in. That first request creates the CosmosClient, does a single CreateDatabaseIfNotExistsAsync and CreateContainerIfNotExistsAsync (and caches the result of both, not that it matters), then errors out with one of the above.

At least one stack trace shows CreateDatabaseIfNotExistsAsync and MoveNext at the top.

This code was working fine as an Azure API App just a few days prior.

I am using direct mode.

I feel like it must be VNet related, but I haven't had any luck identifying what that may be. The subnets all permit everything within the VNet and have the Cosmos service endpoint enabled. The same VNet is attached to the API App and the Cosmos instance.

Any ideas are appreciated.

@sks4903440

Are you by any chance doing a CreateContainerIfNotExistsAsync for each item operation? Or are you doing a lot of control plane operations like CreateContainerIfNotExistsAsync?

I would recommend Contacting Azure Support.

@j82w What is the correct way to do this if I want to call CreateContainerIfNotExistsAsync for n containers to prepare for tests? I'm frequently getting 408s while doing this sequentially.
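
Roughly what I mean by "sequentially", done once per test run; the database/container names and the partition key path are placeholders:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

static class TestContainerSetup
{
    // One-time test setup: create each container if it doesn't exist, then cache the
    // Container references and reuse them for the whole test run.
    static async Task<Dictionary<string, Container>> CreateTestContainersAsync(
        CosmosClient client, IEnumerable<string> containerNames)
    {
        Database database = (await client.CreateDatabaseIfNotExistsAsync("TestDatabase")).Database; // placeholder name

        var containers = new Dictionary<string, Container>();
        foreach (string name in containerNames)
        {
            ContainerResponse response = await database.CreateContainerIfNotExistsAsync(
                new ContainerProperties(id: name, partitionKeyPath: "/pk")); // placeholder partition key path
            containers[name] = response.Container;
        }
        return containers;
    }
}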

@sjrulandch

In my case, it was something in the Azure cloud. I spent 3+ weeks with Azure support and they never figured it out. I rebuilt my Azure resources from scripts, deployed the exact same binary, and it worked. A week later, the broken first attempt began working again.
I was very frustrated with Azure Support's inability to assist me and their "wait until I or the problem just goes away" approach...
