
ConnectionAbortedException ---> Reading is already in progress #52324

Open · Transis-Felipe opened this issue Nov 23, 2023 · 31 comments
Labels: area-networking, feature-kestrel

Comments

Transis-Felipe commented Nov 23, 2023

Describe the bug
We're frequently getting the exception below on .NET 8.0.

Microsoft.AspNetCore.Connections.ConnectionAbortedException: The connection was aborted by the application.
---> System.InvalidOperationException: Reading is already in progress.
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1MessageBody.OnConsumeAsync()
--- End of inner exception stack trace ---

Further technical details
ASP.NET Core 8.0
Kestrel on Windows Server 2019


https://github.com/dotnet/aspnetcore/blob/main/src/Servers/Kestrel/Core/src/Internal/Http/Http1MessageBody.cs

protected override Task OnConsumeAsync()
{
    try
    {
        while (TryReadInternal(out var readResult))
        {
            AdvanceTo(readResult.Buffer.End);

            if (readResult.IsCompleted)
            {
                return Task.CompletedTask;
            }
        }
    }
    catch (BadHttpRequestException ex)
    {
        // At this point, the response has already been written, so this won't result in a 4XX response;
        // however, we still need to stop the request processing loop and log.
        _context.SetBadRequestState(ex);
        return Task.CompletedTask;
    }
    catch (InvalidOperationException ex)
    {
        var connectionAbortedException = new ConnectionAbortedException(CoreStrings.ConnectionAbortedByApplication, ex);
        _context.ReportApplicationError(connectionAbortedException);

        // Have to abort the connection because we can't finish draining the request
        _context.StopProcessingNextRequest();
        return Task.CompletedTask;
    }

    return OnConsumeAsyncAwaited();
}
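
For illustration only, a rough sketch (not our actual app code) of the kind of pattern that, as far as I can tell, can leave the body reader in this state: the handler starts a read on the request body, never awaits it or calls AdvanceTo, and then completes the response while that read is still pending. When Kestrel later tries to drain the rest of the body in OnConsumeAsync above, TryReadInternal finds a read already in progress.

// Hypothetical minimal-API endpoint, for illustration only.
app.MapGet("/example", (HttpContext context) =>
{
    // Fire-and-forget read of the request body: nothing awaits this ValueTask
    // and AdvanceTo is never called, so the reader still has a read in flight.
    _ = context.Request.BodyReader.ReadAsync();

    // The handler returns while that read is still outstanding.
    return Results.Ok("done");
});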

One suggestion would be to implement a TryMapGet that could be called instead of app.MapGet on the server side.

dotnet-issue-labeler bot added the area-networking label Nov 23, 2023
amcasey (Member) commented Nov 28, 2023

@Transis-Felipe From your description, it sounds like this behavior is new in 8.0. Which version were you on before you upgraded?

amcasey (Member) commented Nov 28, 2023

One suggestion would be to implement a TryMapGet that could be called instead of app.MapGet on the server side.

Sorry, I didn't understand this suggestion. Can you elaborate?

amcasey (Member) commented Nov 28, 2023

The exception is from here.

#33409 had a similar stack and was closed in favor of dotnet/runtime#53914. It would be interesting to know whether their setup sounds similar to what you're doing.

amcasey (Member) commented Nov 28, 2023

The stack trace doesn't give us much to go on. I'd say the next steps are probably to collect (and, ideally, share) more verbose logs and to review dotnet/runtime#53914 (which is quite old) in case the problems described there sound similar to your scenario.

amcasey added the "Needs: Author Feedback" label Nov 28, 2023
ghost commented Nov 28, 2023

Hi @Transis-Felipe. We have added the "Needs: Author Feedback" label to this issue, which indicates that we have an open question for you before we can take further action. This issue will be closed automatically in 7 days if we do not hear back from you by then - please feel free to re-open it if you come back to this issue after that time.

Transis-Felipe (Author) commented

One suggestion would be to implement a TryMapGet that could be called instead of app.MapGet on the server side.

Sorry, I didn't understand this suggestion. Can you elaborate?

Hi @amcasey ,

The idea is that TryMapGet would not generate a ReportApplicationError when there's an InvalidOperationException.

ghost added the "Needs: Attention 👋" label and removed the "Needs: Author Feedback" label Nov 29, 2023
Transis-Felipe (Author) commented Nov 29, 2023

@Transis-Felipe From your description, it sounds like this behavior is new in 8.0. Which version were you on before you upgraded?

Before .NET 8 I was using .NET 7, and the error was the same.
I think it has been there from day one, because I've seen people talking about it as far back as .NET Core 3.

amcasey (Member) commented Nov 29, 2023

@Transis-Felipe From dotnet/runtime#53914, it sounds like there are corner cases where subtle buffering behavior can cause this error to be reported. Have you reviewed that issue to see whether any of the practices described there sound like what your app is doing?

Where are you seeing people talking about encountering this issue in .net core 3? I haven't been able to find any reports other than the ones I linked above.

Transis-Felipe (Author) commented Nov 30, 2023

Where are you seeing people talking about encountering this issue in .net core 3? I haven't been able to find any reports other than the ones I linked above.

#17062

#14727

Transis-Felipe (Author) commented

Have you reviewed that issue to see whether any of the practices described there sound like what your app is doing?

I don't have access to the client apps that consume this app.

amcasey (Member) commented Nov 30, 2023

Have you reviewed that issue to see whether any of the practices described there sound like what your app is doing?

I don't have access to the client apps that consume this app.

A lot of the comments I noticed in that issue were about buffering on the server side, so I wouldn't have thought you'd need access to the client apps. Unless you're saying you don't have a client that exercises your server to validate these things?

Transis-Felipe (Author) commented

A lot of the comments I noticed in that issue were about buffering on the server side, so I wouldn't have thought you'd need access to the client apps. Unless you're saying you don't have a client that exercises your server to validate these things?

Yep, they run on their own.

amcasey (Member) commented Dec 1, 2023

Does that mean you're unable to debug the server or collect additional server logs?

Transis-Felipe (Author) commented

Does that mean you're unable to debug the server or collect additional server logs?

I can't debug, but I can collect additional logs.

amcasey (Member) commented Dec 4, 2023

I don't see any logs that directly track the _isReading state, but it would be interesting to see debug-level logs for an affected request.

amcasey (Member) commented Dec 4, 2023

Do the problematic requests include Expect: 100-continue as in dotnet/runtime#53914?

Transis-Felipe (Author) commented

Do the problematic requests include Expect: 100-continue as in dotnet/runtime#53914?

Can I see this on the server side? I don't have access to the clients or to how the client side makes the requests.

Transis-Felipe (Author) commented

I don't see any logs that directly track the _isReading state, but it would be interesting to see debug-level logs for an affected request.

How can I grab this?

amcasey (Member) commented Dec 5, 2023

I usually update my appsettings.json, but there are a number of options, depending on your setup.
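
For example, something like this in appsettings.json (a sketch that assumes the default Microsoft.Extensions.Logging configuration; adjust the categories to taste):

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Debug",
      "Microsoft.AspNetCore.Server.Kestrel": "Debug"
    }
  }
}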

amcasey removed the "Needs: Attention 👋" label Dec 5, 2023
Transis-Felipe (Author) commented Dec 6, 2023

I usually update my appsettings.json, but there are a number of options, depending on your setup.

Wow, changing appsettings to debug produces a lot of info...


Any chance of generating logs only for errors?

amcasey (Member) commented Dec 6, 2023

Wow, changing appsettings to debug produces a lot of info...

Yep! We don't recommend enabling it in production unless you're trying to diagnose a particular problem. There are ways to filter to only particular log messages to reduce the volume, but the problem here is that we don't know what to filter to - basically, we're just trying to collect as much info as possible about the period before a failure.

A rolling log would be convenient but, unfortunately, I don't believe anything like that is built in. It seems possible that someone has built a nuget package for that, but I don't know of one off the top of my head.
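
If the volume becomes unmanageable while you wait for a repro, one compromise (a sketch only, assuming a minimal-hosting Program.cs) is to keep the overall level quiet and raise just the Kestrel categories:

// Keep everything at Warning by default and turn only Kestrel up to Debug.
builder.Logging.SetMinimumLevel(LogLevel.Warning);
builder.Logging.AddFilter("Microsoft.AspNetCore.Server.Kestrel", LogLevel.Debug);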

Transis-Felipe (Author) commented

I will try to use debug for some time to see if we catch the error.

amcasey (Member) commented Dec 6, 2023

Do the problematic requests include Expect: 100-continue as in dotnet/runtime#53914?

Can I see this on the server side? I don't have access to the clients or to how the client side makes the requests.

Sorry, I missed this before. The server can certainly dump request headers, but it makes the output much more verbose. We could get a pretty good idea from the response status codes though - are you seeing any 100s (i.e. on one of the "request finished" lines)?
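
If it helps, here's a rough sketch (purely illustrative, not something we ship) of logging only the requests that actually carry the header, so the rest of the traffic stays quiet:

// Log only requests that sent Expect: 100-continue.
app.Use(async (context, next) =>
{
    if (context.Request.Headers.TryGetValue("Expect", out var expect) &&
        expect.ToString().Contains("100-continue", StringComparison.OrdinalIgnoreCase))
    {
        app.Logger.LogInformation("Expect: 100-continue request for {Path}", context.Request.Path);
    }

    await next();
});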

amcasey (Member) commented Dec 6, 2023

That's weird - I wouldn't expect the client to send another request on the same connection after the FIN.

amcasey (Member) commented Dec 6, 2023

You'd have to enable connection logging or use something like Wireshark to see what was actually in the bad request. Also, this doesn't appear to have led to the fail message from your original report?

Edit: It looks like this variant of the error message (which would normally include the invalid line) indicates that the message was truncated.
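
For reference, connection logging can be turned on per endpoint and dumps the raw connection bytes at debug level, so it is very verbose. A sketch, assuming the endpoint is configured in code (the port is a placeholder):

builder.WebHost.ConfigureKestrel(kestrel =>
{
    kestrel.ListenAnyIP(5000, listenOptions =>
    {
        // Emits the raw bytes read from and written to the connection.
        listenOptions.UseConnectionLogging();
    });
});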

Transis-Felipe (Author) commented

You'd have to enable connection logging or use something like Wireshark to see what was actually in the bad request. Also, this doesn't appear to have led to the fail message from your original report?

No, it's another error that I caught. I will exclude it to avoid confusion.

Transis-Felipe (Author) commented Dec 7, 2023

Sorry to ask you for this, but I'm not finding a way to redirect the output to a file, either through web.config or appsettings.json.

The console is limited to 9999 lines, so it loses results quickly.

amcasey (Member) commented Dec 8, 2023

Unfortunately, I don't think there's built-in support for that - I usually just redirect stdout to a file. There are third party extensions that give you more control, but there isn't a particular one I can recommend off the top of my head.
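
If the app is hosted behind IIS via the ASP.NET Core Module (the earlier web.config mention suggests it might be), the module can capture console output for you; otherwise a plain shell redirect (dotnet MyApp.dll > kestrel.log 2>&1) works. The paths and file names below are placeholders:

<!-- web.config, inside system.webServer -->
<aspNetCore processPath="dotnet"
            arguments=".\MyApp.dll"
            stdoutLogEnabled="true"
            stdoutLogFile=".\logs\stdout" />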

j-hudecek commented

Hi, we're seeing a similar issue on .NET 6.0.25: Ubuntu 22, x64, Kestrel behind Apache with HTTP/2.

Dec 22 12:33:57 h102 service[4714]: warn: Microsoft.AspNetCore.Server.Kestrel[0]
Dec 22 12:33:57 h102 service[4714]:       Connection processing ended abnormally.
Dec 22 12:33:57 h102 service[4714]:       System.InvalidOperationException: Reading is already in progress.
Dec 22 12:33:57 h102 service[4714]:          at System.IO.Pipelines.ThrowHelper.ThrowInvalidOperationException_AlreadyReading()
Dec 22 12:33:57 h102 service[4714]:          at System.IO.Pipelines.Pipe.GetReadResult(ReadResult& result)
Dec 22 12:33:57 h102 service[4714]:          at System.IO.Pipelines.Pipe.ReadAsync(CancellationToken token)
Dec 22 12:33:57 h102 service[4714]:          at System.IO.Pipelines.Pipe.DefaultPipeReader.ReadAsync(CancellationToken cancellationToken)
Dec 22 12:33:57 h102 service[4714]:          at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1Connection.BeginRead(ValueTask`1& awaitable)
Dec 22 12:33:57 h102 service[4714]:          at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequests[TContext](IHttpApplication`1 application)
Dec 22 12:33:57 h102 service[4714]:          at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpProtocol.ProcessRequestsAsync[TContext](IHttpApplication`1 application)

amcasey (Member) commented Jan 16, 2024

@j-hudecek Did these suggestions help at all?

j-hudecek commented

Do you mean setting a higher Expect100ContinueTimeout? I'll try it. We haven't seen the error since, though, so it will take a while to know whether it helped.
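
For reference, that's a client-side knob on SocketsHttpHandler; it controls how long HttpClient waits for the interim 100 response before sending the request body anyway. A sketch only - the 5-second value is just an example:

var handler = new SocketsHttpHandler
{
    Expect100ContinueTimeout = TimeSpan.FromSeconds(5)
};
var client = new HttpClient(handler);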

dotnet-policy-service bot added and wtgodbe removed the pending-ci-rerun label twice (Feb 6 and Feb 13, 2024)
dotnet deleted two comments from dotnet-policy-service bot Feb 13, 2024