Add partial packet detection and fixup #2714

Open · wants to merge 17 commits into main

Conversation

Wraith2
Contributor

@Wraith2 Wraith2 commented Jul 24, 2024

Split out from #2608 per discussion detailed in #2608 (comment)

Adds a packet multiplexer and covering tests.

@Wraith2
Contributor Author

Wraith2 commented Jul 25, 2024

I've added comments to the Packet class as requested. The CI was green apart from some Ubuntu legs which timed out; many other Ubuntu legs succeeded, so I don't draw any direct inference from that.

Ready for review @David-Engel @saurabh500 @cheenamalhotra

@Wraith2 Wraith2 marked this pull request as ready for review July 25, 2024 18:20
@saurabh500 saurabh500 self-requested a review July 29, 2024 19:29
@saurabh500
Contributor

@Wraith2 We are reviewing this and hope to get faster traction towards EOW.
Wanted to give an update, instead of maintaining radio silence.

cc @VladimirReshetnikov

@cheenamalhotra
Member

cheenamalhotra commented Aug 6, 2024

Pasting test failure for reference:

    Failed Microsoft.Data.SqlClient.ManualTesting.Tests.AsyncCancelledConnectionsTest.CancelAsyncConnections [2 m 38 s]
EXEC : error Message:  [/mnt/vss/_work/1/s/build.proj]
     Assert.Empty() Failure: Collection was not empty
  Collection: ["Microsoft.Data.SqlClient.SqlException (0x80131904)"···]
    Stack Trace:
       at Microsoft.Data.SqlClient.ManualTesting.Tests.AsyncCancelledConnectionsTest.RunCancelAsyncConnections(SqlConnectionStringBuilder connectionStringBuilder) in /_/src/Microsoft.Data.SqlClient/tests/ManualTests/SQL/AsyncTest/AsyncCancelledConnectionsTest.cs:line 66
     at Microsoft.Data.SqlClient.ManualTesting.Tests.AsyncCancelledConnectionsTest.CancelAsyncConnections() in /_/src/Microsoft.Data.SqlClient/tests/ManualTests/SQL/AsyncTest/AsyncCancelledConnectionsTest.cs:line 31
     at System.RuntimeMethodHandle.InvokeMethod(Object target, Void** arguments, Signature sig, Boolean isConstructor)
     at System.Reflection.MethodBaseInvoker.InvokeWithNoArgs(Object obj, BindingFlags invokeAttr)

    Standard Output Messages:
   00:00:05.8665447 True Started:8 Done:0 InFlight:8 RowsRead:39 ResultRead:3 PoisonedEnded:1 nonPoisonedExceptions:0 PoisonedCleanupExceptions:0 Count:0 Found:0
   00:00:10.8624529 True Started:12 Done:0 InFlight:12 RowsRead:832 ResultRead:64 PoisonedEnded:6 nonPoisonedExceptions:6 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:00:15.8646242 True Started:17 Done:0 InFlight:17 RowsRead:2327 ResultRead:179 PoisonedEnded:11 nonPoisonedExceptions:9 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:00:20.8677772 True Started:42 Done:6 InFlight:36 RowsRead:4810 ResultRead:370 PoisonedEnded:18 nonPoisonedExceptions:14 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:00:25.8731904 True Started:71 Done:12 InFlight:59 RowsRead:9126 ResultRead:702 PoisonedEnded:30 nonPoisonedExceptions:29 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:00:30.8714979 True Started:77 Done:14 InFlight:63 RowsRead:12207 ResultRead:939 PoisonedEnded:38 nonPoisonedExceptions:36 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:00:35.0004685 True Started:86 Done:25 InFlight:61 RowsRead:17173 ResultRead:1321 PoisonedEnded:49 nonPoisonedExceptions:43 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:00:39.9987443 True Started:97 Done:64 InFlight:33 RowsRead:31798 ResultRead:2446 PoisonedEnded:64 nonPoisonedExceptions:62 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:00:44.9985663 True Started:97 Done:64 InFlight:33 RowsRead:31798 ResultRead:2446 PoisonedEnded:64 nonPoisonedExceptions:62 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:00:49.9982022 True Started:97 Done:64 InFlight:33 RowsRead:31798 ResultRead:2446 PoisonedEnded:64 nonPoisonedExceptions:62 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:00:54.9982968 True Started:97 Done:64 InFlight:33 RowsRead:31798 ResultRead:2446 PoisonedEnded:64 nonPoisonedExceptions:62 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:00:59.9996354 True Started:97 Done:64 InFlight:33 RowsRead:31798 ResultRead:2446 PoisonedEnded:64 nonPoisonedExceptions:62 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:01:04.9991460 True Started:97 Done:64 InFlight:33 RowsRead:31798 ResultRead:2446 PoisonedEnded:64 nonPoisonedExceptions:62 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:01:09.9983868 True Started:97 Done:64 InFlight:33 RowsRead:31798 ResultRead:2446 PoisonedEnded:64 nonPoisonedExceptions:62 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:01:14.9975925 True Started:97 Done:64 InFlight:33 RowsRead:31798 ResultRead:2446 PoisonedEnded:64 nonPoisonedExceptions:62 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:01:19.9977701 True Started:97 Done:64 InFlight:33 RowsRead:31798 ResultRead:2446 PoisonedEnded:64 nonPoisonedExceptions:62 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:01:25.0122289 True Started:97 Done:64 InFlight:33 RowsRead:31798 ResultRead:2446 PoisonedEnded:64 nonPoisonedExceptions:62 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:01:30.0025709 True Started:97 Done:64 InFlight:33 RowsRead:31798 ResultRead:2446 PoisonedEnded:64 nonPoisonedExceptions:62 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:01:35.0024237 True Started:98 Done:65 InFlight:33 RowsRead:32344 ResultRead:2488 PoisonedEnded:65 nonPoisonedExceptions:62 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:01:40.0025057 True Started:100 Done:98 InFlight:2 RowsRead:50297 ResultRead:3869 PoisonedEnded:98 nonPoisonedExceptions:63 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:01:45.0002633 True Started:100 Done:98 InFlight:2 RowsRead:50297 ResultRead:3869 PoisonedEnded:98 nonPoisonedExceptions:63 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:01:49.9986071 True Started:100 Done:98 InFlight:2 RowsRead:50297 ResultRead:3869 PoisonedEnded:98 nonPoisonedExceptions:63 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:01:54.9998736 True Started:100 Done:98 InFlight:2 RowsRead:50297 ResultRead:3869 PoisonedEnded:98 nonPoisonedExceptions:63 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:01:59.9957745 True Started:100 Done:98 InFlight:2 RowsRead:50297 ResultRead:3869 PoisonedEnded:98 nonPoisonedExceptions:63 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:02:04.9985369 True Started:100 Done:98 InFlight:2 RowsRead:50297 ResultRead:3869 PoisonedEnded:98 nonPoisonedExceptions:63 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:02:09.9982641 True Started:100 Done:98 InFlight:2 RowsRead:50297 ResultRead:3869 PoisonedEnded:98 nonPoisonedExceptions:63 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:02:14.9983408 True Started:100 Done:98 InFlight:2 RowsRead:50297 ResultRead:3869 PoisonedEnded:98 nonPoisonedExceptions:63 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:02:19.9988637 True Started:100 Done:98 InFlight:2 RowsRead:50297 ResultRead:3869 PoisonedEnded:98 nonPoisonedExceptions:63 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:02:25.0003251 True Started:100 Done:98 InFlight:2 RowsRead:50297 ResultRead:3869 PoisonedEnded:98 nonPoisonedExceptions:63 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:02:29.9988943 True Started:100 Done:98 InFlight:2 RowsRead:50297 ResultRead:3869 PoisonedEnded:98 nonPoisonedExceptions:63 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:02:35.0000752 True Started:100 Done:99 InFlight:1 RowsRead:50830 ResultRead:3910 PoisonedEnded:99 nonPoisonedExceptions:63 PoisonedCleanupExceptions:0 Count:1 Found:0
   00:02:38.0752114 True Started:100 Done:100 InFlight:0 RowsRead:51376 ResultRead:3952 PoisonedEnded:100 nonPoisonedExceptions:63 PoisonedCleanupExceptions:0 Count:1 Found:0
   Microsoft.Data.SqlClient.SqlException (0x80131904): A severe error occurred on the current command.  The results, if any, should be discarded.
      at Microsoft.Data.SqlClient.SqlConnection.OnError(SqlE

This test should be looked at carefully.

It failed on Ubuntu with .NET 6 and 8, and also hung on Windows when run with Managed SNI (link to logs 1, link to logs 2).
In this use case, multiple parallel async read operations are performed, so connection isolation should remain intact while cancellation occurs in between, but that doesn't seem to be happening.

@Wraith2 can you confirm whether this is something you're able to repro on Windows with Managed SNI? Please make sure the config file is configured to enable Managed SNI on Windows.
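
For reference, a minimal sketch of the pattern this test exercises (the connection string, query, and counts below are placeholders, not the test's actual code): several connections each run an async read loop in parallel while SqlCommand.Cancel is called on some of them after a short delay, and each reader should only ever observe its own cancellation.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

class ParallelCancelSketch
{
    // Placeholder connection string; MARS enabled as in the real test environment.
    const string ConnectionString = "Server=localhost;Integrated Security=true;MultipleActiveResultSets=true";

    static async Task Main()
    {
        // Run several connections in parallel, poisoning (cancelling) every other one.
        var tasks = Enumerable.Range(0, 10).Select(i => RunOneAsync(poison: i % 2 == 0));
        await Task.WhenAll(tasks);
    }

    static async Task RunOneAsync(bool poison)
    {
        using var connection = new SqlConnection(ConnectionString);
        await connection.OpenAsync();
        using var command = new SqlCommand(
            "select top (100000) a.object_id from sys.objects a cross join sys.objects b", connection);

        // Cancel some commands mid-read; only those readers should fail, and only with a
        // cancellation error - the other connections must be unaffected.
        var cancelTask = poison
            ? Task.Delay(50).ContinueWith(_ => command.Cancel())
            : Task.CompletedTask;
        try
        {
            using var reader = await command.ExecuteReaderAsync();
            while (await reader.ReadAsync()) { }
        }
        catch (SqlException ex) when (poison)
        {
            Console.WriteLine($"poisoned connection ended: {ex.Message}");
        }
        await cancelTask;
    }
}
```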

{
// Do nothing with callback if closed or broken and error not 0 - callback can occur
// after connection has been closed. PROBLEM IN NETLIB - DESIGN FLAW.
return;
Member

Can you add a Debug.Assert here and check if this is taking any hit?

Member

@cheenamalhotra cheenamalhotra left a comment


The test needs to be fixed before reviewing any further.

@Wraith2
Contributor Author

Wraith2 commented Aug 6, 2024

Isn't this the set of tests that @David-Engel pointed out in #2608 (comment)? If so, we discussed it at length on the Teams call. I don't believe that those tests are reliable.

Set up a breakpoint or Debug.WriteLine where an exception is added to the state object and run the test. You should find that an exception is always added to the state object but that the test will usually succeed. That should not be possible: if an exception is added, it should be thrown. The test is missing failures, and if that's the case then the test is unreliable.

@Wraith2
Contributor Author

Wraith2 commented Aug 6, 2024

When you work past the terrible code in SNITCPHandle and make the test run for long enough, it settles into a steady state where it can't reach the end. There is no indication why yet.

00:05:45.3696502 True Started:97 Done:88 InFlight:9 RowsRead:241216 ResultRead:3769 PoisonedEnded:88 nonPoisonedExceptions:0 PoisonedCleanupExceptions:0 Count:0 Found:0
00:05:50.3638134 True Started:97 Done:88 InFlight:9 RowsRead:241216 ResultRead:3769 PoisonedEnded:88 nonPoisonedExceptions:0 PoisonedCleanupExceptions:0 Count:0 Found:0

Those 9 in-flight items just don't seem to complete, but I don't know why.
This is going to need help from the MS side to identify what's going on here.

@Wraith2
Contributor Author

Wraith2 commented Aug 6, 2024

After a few more hours of investigation I know what the problem is, but I have no clue what change has caused it.

In SqlDataReader, when an async method is called we use a context object to hold some state and pass that context object to all the async methods that implement the async read machinery. Part of this state is the TaskCompletionSource.
When running the CancelAsyncConnections test, many connections are opened and then SqlCommand.Cancel is called after a brief timed wait. If the timing of the cancel operation is exactly wrong, an async operation can be in progress and between packets at the moment the cancellation is executed.
This causes the thread awaiting the async operation to wait indefinitely for a task that will never be completed, which is what causes the stuck threads. The threads are stuck, so the test can never complete and it then times out.
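
A minimal sketch of that hang pattern, using illustrative names rather than the driver's actual types: the awaiter resumes only if something completes the TaskCompletionSource, so a cancel path that fires between packets and never completes the pending source leaves the await pending forever.

```csharp
using System;
using System.Threading.Tasks;

class AsyncReadContext
{
    // Completion source that the packet-arrival callback is expected to complete.
    public TaskCompletionSource<bool> Completion { get; } = new TaskCompletionSource<bool>();

    // Called by the network layer when the next packet arrives; in the hang scenario
    // this never fires because cancellation landed between packets.
    public void OnPacketArrived() => Completion.TrySetResult(true);

    // A cancel path that never completes the pending source reproduces the hang:
    // nothing else will ever resume the awaiter.
    public void Cancel(bool completePendingTask)
    {
        if (completePendingTask)
        {
            Completion.TrySetCanceled();
        }
    }
}

class HangDemo
{
    static async Task Main()
    {
        var context = new AsyncReadContext();
        context.Cancel(completePendingTask: false); // cancel lands "between packets"

        var finished = await Task.WhenAny(context.Completion.Task, Task.Delay(2000));
        Console.WriteLine(finished == context.Completion.Task
            ? "completed"
            : "hung: no packet arrived and the cancel path never completed the task");
    }
}
```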

What I don't understand is how cancel is supposed to work. I'm unable to run the tests in native SNI mode because the native SNI can't be initialized (it can't find the SNI dll), so I can't compare the managed to unmanaged implementations here. I don't believe that I have made any change that should affect cancellation. I have verified that there are no partial packets in the state objects when the async tasks get stuck.

I don't understand how async cancellation is supposed to work at all.

@Wraith2
Contributor Author

Wraith2 commented Aug 10, 2024

Can someone with CI access rerun the failed legs? The failures are random, or down to CI resources not being available, as far as I can tell.

@Wraith2
Contributor Author

Wraith2 commented Aug 10, 2024

The current failures are interesting. They're in the test that was failing before, but the new ones are only detected because I made the test more accurate.

  [xUnit.net 00:06:40.39]     Microsoft.Data.SqlClient.ManualTesting.Tests.AsyncCancelledConnectionsTest.CancelAsyncConnections [FAIL]
  [xUnit.net 00:06:40.39]       Assert.Empty() Failure: Collection was not empty
  [xUnit.net 00:06:40.39]       Collection: ["Microsoft.Data.SqlClient.SqlException (0x80131904)"···]
  [xUnit.net 00:06:40.39]       Stack Trace:
  [xUnit.net 00:06:40.39]         /_/src/Microsoft.Data.SqlClient/tests/ManualTests/SQL/AsyncTest/AsyncCancelledConnectionsTest.cs(71,0): at Microsoft.Data.SqlClient.ManualTesting.Tests.AsyncCancelledConnectionsTest.RunCancelAsyncConnections(SqlConnectionStringBuilder connectionStringBuilder)
  [xUnit.net 00:06:40.39]         /_/src/Microsoft.Data.SqlClient/tests/ManualTests/SQL/AsyncTest/AsyncCancelledConnectionsTest.cs(32,0): at Microsoft.Data.SqlClient.ManualTesting.Tests.AsyncCancelledConnectionsTest.CancelAsyncConnections()
  [xUnit.net 00:06:40.39]            at System.RuntimeMethodHandle.InvokeMethod(Object target, Void** arguments, Signature sig, Boolean isConstructor)
  [xUnit.net 00:06:40.39]            at System.Reflection.MethodBaseInvoker.InvokeWithNoArgs(Object obj, BindingFlags invokeAttr)
  [xUnit.net 00:06:40.39]       Output:
  [xUnit.net 00:06:40.39]         00:00:05.4318805 True Started:21 Done:0 InFlight:21 RowsRead:117 ResultRead:9 PoisonedEnded:4 nonPoisonedExceptions:2 PoisonedCleanupExceptions:0 Count:1 Found:0
  [xUnit.net 00:06:40.39]         00:00:10.4374767 True Started:25 Done:0 InFlight:25 RowsRead:1469 ResultRead:113 PoisonedEnded:11 nonPoisonedExceptions:9 PoisonedCleanupExceptions:0 Count:1 Found:0
  [xUnit.net 00:06:40.39]         00:00:15.4529038 True Started:31 Done:1 InFlight:30 RowsRead:4732 ResultRead:364 PoisonedEnded:14 nonPoisonedExceptions:13 PoisonedCleanupExceptions:0 Count:1 Found:0
  [xUnit.net 00:06:40.39]         00:00:20.4568918 True Started:67 Done:12 InFlight:55 RowsRead:7852 ResultRead:604 PoisonedEnded:28 nonPoisonedExceptions:21 PoisonedCleanupExceptions:0 Count:1 Found:0
  [xUnit.net 00:06:40.39]         00:00:24.9990795 True Started:91 Done:32 InFlight:59 RowsRead:19955 ResultRead:1535 PoisonedEnded:43 nonPoisonedExceptions:35 PoisonedCleanupExceptions:0 Count:1 Found:0
  [xUnit.net 00:06:40.39]         00:00:28.1341854 True Started:100 Done:100 InFlight:0 RowsRead:52273 ResultRead:4021 PoisonedEnded:100 nonPoisonedExceptions:44 PoisonedCleanupExceptions:0 Count:1 Found:0
  [xUnit.net 00:06:40.39]         Microsoft.Data.SqlClient.SqlException (0x80131904): A severe error occurred on the current command.  The results, if any, should be discarded.
  [xUnit.net 00:06:40.39]            at Microsoft.Data.SqlClient.SqlConnection.OnError(SqlE
    Failed Microsoft.Data.SqlClient.ManualTesting.Tests.AsyncCancelledConnectionsTest.CancelAsyncConnections [28 s]
EXEC : error Message:  [/mnt/vss/_work/1/s/build.proj]
     Assert.Empty() Failure: Collection was not empty

The previous version of the test accepted any exception when it was expecting a cancellation exception. It was passing on netfx with my previous changes because timeout exceptions were being thrown. I judged that accepting a timeout when we were supposed to be testing whether cancellation had occurred was not correct.

If we had retained the previous version of the test then everything would have passed cleanly. In the current situation, since the test completed correctly without hanging, the result is equivalent to what we would have seen in all past test runs: every started thread that we expected to be cancelled exited with an exception.

[edit]
I ran that single test locally in .NET 6 managed SNI mode using the VS "Run Until Failure" option. This runs the test up to 1000 times sequentially and stops if it fails. It completed 1000 runs successfully.

@David-Engel
Contributor

@Wraith2 You might be banging your head against an unrelated issue in the driver. IIRC, the test was only introduced to ensure we don't regress "The MARS TDS header contained errors." issue. (The test code came from the repro.)

If you isolate your test changes and run them against main code, does it still fail? Yes, the correct exception is probably "Operation cancelled by user." where the exception is being caught. But if it's unrelated to your other changes, I would leave that part of the test as it was and file a new issue with repro code. As it is, it's unclear if and how this behavior is impacting users and I wouldn't hold up your perf improvements for it.

@Wraith2
Contributor Author

Wraith2 commented Aug 12, 2024

There was definitely a real problem: the results differed between main and my branch. I've solved that issue now, and the current state is that we're seeing a real failure because I've made the test more sensitive. I think it's probably safe to lower the sensitivity of the test again now, because the new test that I've added covers the specific scenario in the multiplexer that I had missed and everything else passes. I'll try that and see how the CI likes it.

I think the current state of this branch is that it is as stable as live. We need to have confidence that this set of changes is correct before we can merge it. It's high-risk, high-complexity code. Even understanding it very deeply, it has taken me a week of active debugging to track down a very important behaviour change that I missed.

@Wraith2
Contributor Author

Wraith2 commented Aug 12, 2024

Can someone re-run the failed legs? The only failing test is something to do with event counters, which I've been nowhere near.

@Wraith2
Contributor Author

Wraith2 commented Aug 12, 2024

The failing test is EventCounter_ReclaimedConnectionsCounter_Functional. It's doing something with GC specific to .NET 6. It's failing sporadically on .NET 6 managed SNI runs, but not deterministically. I can't make it fail locally to trace what might be happening.

@Wraith2
Contributor Author

Wraith2 commented Aug 19, 2024

Any thoughts?

@David-Engel
Contributor

David-Engel commented Aug 26, 2024

I'm not seeing the failures you mentioned in EventCounter_ReclaimedConnectionsCounter_Functional [in the CI results]. I mainly see fairly consistent failures of CancelAsyncConnections on Linux. It seems to pass on Windows managed SNI, so there might be something that is Linux/Ubuntu network specific. Can you run/debug the test against a local WSL or Docker instance?

@Wraith2
Contributor Author

Wraith2 commented Aug 26, 2024

If I click through the failure I get to this page: https://sqlclientdrivers.visualstudio.com/public/_build/results?buildId=95784&view=ms.vss-test-web.build-test-results-tab

The cancel tests are passing now; those failed in the previous runs but not the current ones.

@David-Engel
Contributor

I think there is something wrong with the Tests tab. I don't usually reference it. I scroll down the summary tab to see what jobs had failures:

[screenshot]

Then drill into the job and the task that failed to see the log:

[screenshot]

@Wraith2
Contributor Author

Wraith2 commented Aug 27, 2024

If it's AsyncCancelledConnectionsTest again then there isn't anything further I can do. That test is multithreaded and timing-dependent. I've traced the individual packets through the entire call stack, and I've run it for 1000 iterations successfully after fixing a reproducible error in it. If someone can isolate a reproducible problem from it then I'll investigate.

@David-Engel
Contributor

I chatted with @saurabh500 and I just want to add that this is definitely something we all want to see get merged. It'll just take someone finding time (it could take a few days of dedicated time) to get their head wrapped around the new code and be able to help repro/debug to find the issue.

@Wraith2
Contributor Author

Wraith2 commented Aug 28, 2024

I'm happy to make myself available to talk through the code with anyone that needs it.

@saurabh500
Contributor

@Wraith2 and @David-Engel I was looking at the lifecycle of the snapshots, and something that stood out in NetCore vs NetFx is that SqlDataReader for NetCore stores the cached snapshot on the SqlInternalConnectionTds, which is a shared resource among all the SqlDataReader instances running on a MARS connection.

private void PrepareAsyncInvocation(bool useSnapshot)
{
    // if there is already a snapshot, then the previous async command
    // completed with exception or cancellation.  We need to continue
    // with the old snapshot.
    if (useSnapshot)
    {
        Debug.Assert(!_stateObj._asyncReadWithoutSnapshot, "Can't prepare async invocation with snapshot if doing async without snapshots");

        if (_snapshot == null)
        {
            if (_connection?.InnerConnection is SqlInternalConnection sqlInternalConnection)
            {
                _snapshot = Interlocked.Exchange(ref sqlInternalConnection.CachedDataReaderSnapshot, null) ?? new Snapshot();
            }
            else
            {
                _snapshot = new Snapshot();
            }

This means that we are saving the reader snapshot on the shared resource, which can be overwritten by any other reader.
Also a reader can receive another reader's snapshot.

@Wraith2 have you had a chance to pursue this line of investigation for the hanging test?

I wonder if the timing is causing the wrong cached snapshot to be provided to a SqlDataReader, causing data corruption and likely causing a hang.
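
A minimal sketch of the single-slot caching pattern being discussed (the types below are simplified stand-ins, not the driver's actual classes): because the slot lives on the shared connection object, any reader on the same MARS connection can return a snapshot to it, and any other reader can pick that snapshot up.

```csharp
using System;
using System.Threading;

// Simplified stand-ins for the real types.
class Snapshot { public string Owner; }

class SharedConnection
{
    // Single cache slot shared by every reader on this (MARS) connection.
    public Snapshot CachedDataReaderSnapshot;
}

class Reader
{
    private readonly SharedConnection _connection;
    private Snapshot _snapshot;
    public Reader(SharedConnection connection) => _connection = connection;

    public void PrepareAsyncInvocation(string name)
    {
        // Take whatever is in the shared slot, or make a new one - the same shape as
        // the Interlocked.Exchange in PrepareAsyncInvocation quoted above.
        _snapshot = Interlocked.Exchange(ref _connection.CachedDataReaderSnapshot, null) ?? new Snapshot();
        Console.WriteLine($"{name} got a snapshot previously owned by: {_snapshot.Owner ?? "(new)"}");
        _snapshot.Owner = name;
    }

    public void CleanupAfterAsyncInvocation()
    {
        // Return the snapshot to the shared slot; any other reader may now receive it.
        Interlocked.Exchange(ref _connection.CachedDataReaderSnapshot, _snapshot);
        _snapshot = null;
    }
}

class CacheDemo
{
    static void Main()
    {
        var connection = new SharedConnection();
        var readerA = new Reader(connection);
        var readerB = new Reader(connection);

        readerA.PrepareAsyncInvocation("A");
        readerA.CleanupAfterAsyncInvocation(); // A's snapshot goes back into the shared slot
        readerB.PrepareAsyncInvocation("B");   // B receives the snapshot that A cached
    }
}
```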

@saurabh500
Contributor

SqlInternalConnection.cs


#if NET6_0_OR_GREATER
        internal SqlCommand.ExecuteReaderAsyncCallContext CachedCommandExecuteReaderAsyncContext;
        internal SqlCommand.ExecuteNonQueryAsyncCallContext CachedCommandExecuteNonQueryAsyncContext;
        internal SqlCommand.ExecuteXmlReaderAsyncCallContext CachedCommandExecuteXmlReaderAsyncContext;

        internal SqlDataReader.Snapshot CachedDataReaderSnapshot;
        internal SqlDataReader.IsDBNullAsyncCallContext CachedDataReaderIsDBNullContext;
        internal SqlDataReader.ReadAsyncCallContext CachedDataReaderReadAsyncContext;
#endif

@saurabh500
Contributor

saurabh500 commented Sep 4, 2024

@Wraith2 I see that you made these changes in the first place. Can you try another PR where you remove the storage of these contexts and snapshots on SqlInternalConnection and, with the multiplexing change, see if this solves the problem?

Also, I am happy to be told that my theory is wrong, but I would like to understand how, in the MARS case, the shared cached contexts on the internal connection are a safe design choice.

@Wraith2
Contributor Author

Wraith2 commented Nov 1, 2024

Do these details help? If not, how can I help narrow it down?

In each case where something goes wrong we need to know how we got there. In those dumps you can see that a large part of the packet data is filled with FF; that indicates the buffer is likely part of multiple packets that got stuck together, which the multiplexer then pulled apart, filling the unused space with F's to help debugging.

The problem is that knowing how we got to any particular place is complicated, and the more test code you add, the more you disrupt timings, which changes the observed behaviour. Ideally we'd be able to dump all packets on the connection we're interested in as they leave the network layer for the TDS layer, but when we're running 100 connections this is both disruptive and very hard to implement. Each thread uses 350+ packets in just this short test.

Given a sequence of 8K buffers we can, in theory, just run them through the test harness and see whether the multiplexer deconstructs and reconstructs them successfully. Unfortunately there is another class of problem which the test harness can't cover: errors caused by the internal behaviour of the real library as it consumes packets. The test harness runs packets; the real library uses the data inside the packets to run.

The data from your example:

04 01 00 40 00 38 F8 00 00 
unused buffer:
00 00 06 00 00 00 02 00 00 00 0A 00 00 00 05 00 
00 00 01 00 00 00 08 00 00 00 E4 0B 00 00 00 DF 
00 00 00 00 FE 04 03 00 00 00 FD 02 00 C1 00 00 
00 00 00 00 00 00 00 04 01 00 15 00 38 01 00 FD 
20 00 FD 00 00 00 00 00 00 00 00 00

looks like 2 packets. The 04 00 and 04 01 are the indicators I'm looking for; they're followed by the length. So it looks like the current packet header is 04 01 00 40 00 38 F8 00, meaning 0x40 bytes in total, which is 64, so 56 bytes of payload, meaning that

04 01 00 40 00 38 F8 00 00 
unused buffer:
00 00 06 00 00 00 02 00 00 00 0A 00 00 00 05 00 
00 00 01 00 00 00 08 00 00 00 E4 0B 00 00 00 DF 
00 00 00 00 FE 04 03 00 00 00 FD 02 00 C1 00 00 
00 00 00 00 00 00 00

is one complete packet, and the buffer also contains another packet header, 04 01 00 15 00 38 01 00. The 0x15 length is 21; minus the 8-byte header that leaves 13 bytes of payload, which is

04 01 00 15 00 38 01 00 FD 
20 00 FD 00 00 00 00 00 00 00 00 00

After this it's all F's, which shows that we've got the right bits of data, because all the other bits were blanked.

So the data is correctly accounted for, but the current packet buffer should have been split into two packets. The data inside the packet we're currently reading is what triggered the IsValidTdsToken failure, but it doesn't look like the failure is caused by the packet reconstruction logic, because the packets' contents and lengths add up correctly.
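
For anyone following the arithmetic above, here is a minimal sketch of how those header fields are interpreted (a reading aid only, not the multiplexer's actual code): a TDS packet starts with an 8-byte header whose third and fourth bytes hold the big-endian total length including the header, so a buffer holding several concatenated packets can be walked packet by packet, and anything whose declared length runs past the end of the buffer is a partial packet.

```csharp
using System;
using System.Collections.Generic;

static class TdsPacketWalker
{
    const int HeaderLength = 8;

    // Splits a buffer that may contain several concatenated TDS packets.
    // Stops when the remaining data is only a partial packet.
    public static List<(byte Type, byte Status, int TotalLength, byte[] Payload)> Split(ReadOnlySpan<byte> buffer)
    {
        var packets = new List<(byte Type, byte Status, int TotalLength, byte[] Payload)>();
        int offset = 0;
        while (offset + HeaderLength <= buffer.Length)
        {
            byte type = buffer[offset];        // e.g. 0x04 = tabular result
            byte status = buffer[offset + 1];  // e.g. 0x01 = end of message
            int totalLength = (buffer[offset + 2] << 8) | buffer[offset + 3]; // big-endian, includes the 8-byte header

            if (totalLength < HeaderLength || offset + totalLength > buffer.Length)
            {
                break; // partial packet: more data is needed before it can be split out
            }

            byte[] payload = buffer.Slice(offset + HeaderLength, totalLength - HeaderLength).ToArray();
            packets.Add((type, status, totalLength, payload));
            offset += totalLength;
        }
        return packets;
    }
}

class WalkerDemo
{
    static void Main()
    {
        // The first header from the dump above, 04 01 00 40 ..., declares 0x40 = 64 bytes total,
        // i.e. 56 bytes of payload; with only the header present it is detected as partial.
        byte[] headerOnly = { 0x04, 0x01, 0x00, 0x40, 0x00, 0x38, 0xF8, 0x00 };
        Console.WriteLine(TdsPacketWalker.Split(headerOnly).Count); // 0 - partial packet
    }
}
```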

How do we fix it? I need a reproduction I can run quickly to the point of failure multiple times so I can add in debugging statements and breakpoints to poke at it.

@Wraith2
Contributor Author

Wraith2 commented Nov 1, 2024

In this specific case I'd add something (a console print, wait for an attached debugger, a break, whatever works) and try to identify how a partial packet from PartialPacket with ContainsCompletePacket=true and DataLength>RequiredLength ended up being used in a call to SetBuffer, because it shouldn't have been without passing through the multiplexer.

@@ -3533,7 +3533,7 @@ private TdsOperationStatus TryNextResult(out bool more)
if (result != TdsOperationStatus.Done)
{
more = false;
- return TdsOperationStatus.Done;
+ return result;
Contributor

Should this be fixed in main with a separate PR?

Contributor

This originally came from my Wraith2#5 PR. I didn't spot it on the original OperationStatus PR, so decided to submit it for rollup into this one. I'm happy to push it into main directly too though.

Contributor

I'm not sure if it has any implications if left in main

Contributor Author

Nothing in the current codebase needs to be able to tell the difference between Done and NeedMoreData. That will come in part 3 of this series of PRs. So this can be here, or in main as a separate PR; it doesn't really matter. It just seemed easiest to incorporate it into this change when @edwardneal PR'd it into my repo.

@Wraith2
Contributor Author

Wraith2 commented Nov 4, 2024

I've rebased onto main to fix the conflicts. Locally the .NET 6 tests won't build, but I'm hoping the CI has no problems with them. I've also added a try-catch around the pending read increment/decrement which @MichelZ found was causing him problems during testing; other places in the code use a try-catch pattern with this variable, so it feels natural to add it in the location I have.

I ran tests on Windows with native SNI over the weekend and didn't encounter any test failures that trace back to partial packets. I had a small number of test failures where the cancellation had not been sent by the time the test completed (_attentionSending and _attentionSent were false). Those are a timing issue unrelated to the change I've made.

@cheenamalhotra and @saurabh500 can you review please?

Given that the timescale for the 6.0 release is fixed and it is an important LTS milestone, I no longer think that this change should be part of that release. I think 6.0 should go out without this change (and the follow-up change required to enable fast async strings), and the versions after that should include them.

Opinions?

@MichelZ
Contributor

MichelZ commented Nov 5, 2024

On the bright side, I've not seen any more test failures with the latest code. However, attempting to stress the async cancel path, I modified CancellationTokenIsRespected_ReadAsync:

        public static async Task CancellationTokenIsRespected_ReadAsync()
        {
            const string longRunningQuery = @"
with TenRows as (select Value from (values (1), (2), (3), (4), (5), (6), (7), (8), (9), (10)) as TenRows (Value)),
    ThousandRows as (select A.Value as A, B.Value as B, C.Value as C from TenRows as A, TenRows as B, TenRows as C)
select *
from ThousandRows as A, ThousandRows as B, ThousandRows as C;";

            for (int i = 1; i <= 1000; i++)
            {
                using (var connection = new SqlConnection(DataTestUtility.TCPConnectionString))
                {
                    await connection.OpenAsync();

                    Stopwatch stopwatch = Stopwatch.StartNew();
                    try
                    {
                        using (var source = new CancellationTokenSource())
                        {
                            source.CancelAfter(i);
                            using (var command = new SqlCommand(longRunningQuery, connection))
                            using (var reader = await command.ExecuteReaderAsync(source.Token))
                            {
                                while (await reader.ReadAsync(source.Token))
                                {
                                }
                            }
                        }
                        Assert.True(false, "Cancellation did not trigger exception.");
                    }
                    catch (InvalidOperationException ioe)
                    {
                        Assert.Contains("Operation cancelled by user", ioe.Message);
                    }
                    catch (OperationCanceledException oce)
                    {
                        Assert.Contains("A task was canceled", oce.Message);
                    }
                    catch (SqlException se)
                    {
                        Assert.Contains("Operation cancelled by user", se.Message);
                    }

                    Assert.True(stopwatch.ElapsedMilliseconds < 10000, "Cancellation did not trigger on time.");
                }
            }
        }

I ran this quite a few times now on multiple machines (Windows, 16 cores/32 threads, Managed SNI, .NET 8; Linux, 2 cores, .NET 8), and while I get some "cancellation did not trigger on time" failures, I have never seen another failure from this.
I also count the timing issues and do not abort the test, so tests with timing issues run to completion.

@MichelZ
Contributor

MichelZ commented Nov 5, 2024

I've removed the timing Assert and let it "run until failure"; I'm currently on iteration 13 without failures (this time with native SNI).


@David-Engel
Contributor

I've run this a few more times on my systems and the results look good to me. I'm tempted to say put it in 6.0 because I really want the async perf improvement in there, but that's probably not wise. 😄

@Wraith2
Contributor Author

Wraith2 commented Nov 16, 2024

It depends on the time remaining. The next PR is substantially easier and smaller. I thought the release date was going to coincide with .NET Conf, but that's gone already.

@MichelZ
Contributor

MichelZ commented Nov 16, 2024

According to the 6.0.0 Milestone it's currently planned for Dec 9th
