
[bugfix] Sequentially issue read requests, process results concurrently #436

Merged: puellanivis merged 1 commit into master from patch/sequential-concurrent-read-requests on May 22, 2021

Conversation

puellanivis (Collaborator)

Originally, we assumed that read requests could be issued in any arbitrary order and would execute the same way. This holds in many cases, but fails when the server might reset the file once an io.EOF has been sent back to the client.

So, this PR takes care to issue requests sequentially, and then handles the results as they come back. There is a further change: since we can no longer simply re-read a chunk until it is completely read, we have to assume that each chunk read completes in full, rather than returning a short read followed by an EOF on the next read. The standard only guarantees this for regular files. Fortunately, we are already stat-ing the remote file to get its length, so we additionally check the permissions to make sure it is a regular file.

I also brought in the pool.go changes from the filexfer implementation PR, since having resChanPool was useful here.
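
To make the described shape concrete, here is a minimal, hedged sketch of the pattern, not the library's actual WriteTo code: requests are issued strictly in offset order, each one's reply channel is queued in that same order, and replies are drained in issue order even though they may complete concurrently. sendReadRequest, readResult, and the sizes below are hypothetical stand-ins.

package main

import (
	"fmt"
	"time"
)

type readResult struct {
	off  int64
	data []byte
	err  error
}

// sendReadRequest is a hypothetical stand-in for dispatching one read request:
// it "sends" the request for a single chunk and returns a channel on which the
// reply will arrive later.
func sendReadRequest(off, fileSize, chunkSize int64) <-chan readResult {
	ch := make(chan readResult, 1)
	go func() {
		time.Sleep(time.Millisecond) // simulate network latency
		n := chunkSize
		if rem := fileSize - off; rem < n {
			n = rem
		}
		ch <- readResult{off: off, data: make([]byte, n)}
	}()
	return ch
}

func main() {
	const chunkSize = int64(32 * 1024)
	const fileSize = 5*chunkSize + 100

	// Issue requests strictly in offset order, queuing each reply channel in
	// the same order so the consumer can reassemble the stream.
	resChans := make(chan (<-chan readResult), 8)
	go func() {
		defer close(resChans)
		for off := int64(0); off < fileSize; off += chunkSize {
			resChans <- sendReadRequest(off, fileSize, chunkSize)
		}
	}()

	// Replies may complete out of order, but draining the channels in issue
	// order means the data is consumed sequentially.
	var copied int64
	for ch := range resChans {
		res := <-ch
		if res.err != nil {
			fmt.Println("read error:", res.err)
			return
		}
		copied += int64(len(res.data))
	}
	fmt.Println("bytes copied:", copied) // 5*32768 + 100 = 163940
}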

@puellanivis (Collaborator, Author)

name                        old time/op    new time/op     delta
WriteTo1k-12                  5.97ms ± 1%     5.83ms ± 3%   -2.40%  (p=0.001 n=10+10)
WriteTo16k-12                 6.16ms ± 3%     6.20ms ± 6%     ~     (p=0.529 n=10+10)
WriteTo32k-12                 6.43ms ± 3%     6.36ms ± 3%     ~     (p=0.165 n=10+10)
WriteTo128k-12                6.64ms ± 2%     6.53ms ± 3%   -1.70%  (p=0.043 n=10+10)
WriteTo512k-12                6.69ms ± 2%     6.59ms ± 3%   -1.47%  (p=0.015 n=10+10)
WriteTo1MiB-12                6.74ms ± 3%     6.67ms ± 3%     ~     (p=0.190 n=10+10)
WriteTo4MiB-12                6.85ms ± 1%     6.72ms ± 4%     ~     (p=0.063 n=10+10)
WriteTo4MiBDelay10Msec-12      108ms ± 0%      100ms ± 0%   -6.66%  (p=0.000 n=9+10)
WriteTo4MiBDelay50Msec-12      511ms ± 0%      462ms ± 0%   -9.48%  (p=0.000 n=10+10)
WriteTo4MiBDelay150Msec-12     1.51s ± 0%      1.37s ± 0%   -9.65%  (p=0.000 n=10+10)

name                        old speed      new speed       delta
WriteTo1k-12                1.76GB/s ± 1%   1.80GB/s ± 3%   +2.48%  (p=0.001 n=10+10)
WriteTo16k-12               1.70GB/s ± 3%   1.69GB/s ± 6%     ~     (p=0.529 n=10+10)
WriteTo32k-12               1.63GB/s ± 3%   1.65GB/s ± 3%     ~     (p=0.165 n=10+10)
WriteTo128k-12              1.58GB/s ± 2%   1.61GB/s ± 3%   +1.74%  (p=0.043 n=10+10)
WriteTo512k-12              1.57GB/s ± 2%   1.59GB/s ± 2%   +1.20%  (p=0.028 n=10+9)
WriteTo1MiB-12              1.56GB/s ± 3%   1.57GB/s ± 3%     ~     (p=0.190 n=10+10)
WriteTo4MiB-12              1.53GB/s ± 1%   1.56GB/s ± 4%     ~     (p=0.063 n=10+10)
WriteTo4MiBDelay10Msec-12   97.4MB/s ± 0%  104.4MB/s ± 0%   +7.14%  (p=0.000 n=9+10)
WriteTo4MiBDelay50Msec-12   20.5MB/s ± 0%   22.7MB/s ± 0%  +10.48%  (p=0.000 n=10+10)
WriteTo4MiBDelay150Msec-12  6.93MB/s ± 0%   7.67MB/s ± 0%  +10.66%  (p=0.000 n=10+10)

name                        old alloc/op   new alloc/op    delta
WriteTo1k-12                  15.3MB ± 0%     13.3MB ± 0%  -12.99%  (p=0.000 n=10+9)
WriteTo16k-12                 15.3MB ± 0%     13.3MB ± 0%  -13.01%  (p=0.000 n=10+9)
WriteTo32k-12                 15.3MB ± 0%     13.3MB ± 0%  -13.00%  (p=0.000 n=10+10)
WriteTo128k-12                15.3MB ± 0%     13.3MB ± 0%  -13.01%  (p=0.000 n=10+10)
WriteTo512k-12                15.3MB ± 0%     13.3MB ± 0%  -13.02%  (p=0.000 n=10+9)
WriteTo1MiB-12                15.3MB ± 0%     13.3MB ± 0%  -13.02%  (p=0.000 n=10+10)
WriteTo4MiB-12                15.3MB ± 0%     13.3MB ± 0%  -13.01%  (p=0.000 n=10+9)
WriteTo4MiBDelay10Msec-12     15.3MB ± 0%     13.3MB ± 0%  -13.24%  (p=0.000 n=10+9)
WriteTo4MiBDelay50Msec-12     15.3MB ± 0%     13.3MB ± 0%  -13.26%  (p=0.000 n=10+9)
WriteTo4MiBDelay150Msec-12    15.3MB ± 0%     13.3MB ± 0%  -13.25%  (p=0.000 n=10+7)

name                        old allocs/op  new allocs/op   delta
WriteTo1k-12                   2.29k ± 0%      2.22k ± 0%   -3.04%  (p=0.000 n=10+10)
WriteTo16k-12                  2.29k ± 0%      2.22k ± 0%   -3.01%  (p=0.000 n=10+10)
WriteTo32k-12                  2.29k ± 0%      2.22k ± 0%   -2.96%  (p=0.000 n=10+10)
WriteTo128k-12                 2.29k ± 0%      2.22k ± 0%   -2.96%  (p=0.000 n=10+10)
WriteTo512k-12                 2.29k ± 0%      2.22k ± 0%   -2.96%  (p=0.000 n=10+10)
WriteTo1MiB-12                 2.29k ± 0%      2.22k ± 0%   -2.94%  (p=0.000 n=10+10)
WriteTo4MiB-12                 2.29k ± 0%      2.22k ± 0%   -3.00%  (p=0.000 n=10+10)
WriteTo4MiBDelay10Msec-12      2.68k ± 0%      2.62k ± 0%   -2.37%  (p=0.000 n=10+10)
WriteTo4MiBDelay50Msec-12      2.71k ± 1%      2.62k ± 0%   -3.23%  (p=0.000 n=10+10)
WriteTo4MiBDelay150Msec-12     2.71k ± 1%      2.63k ± 1%   -2.95%  (p=0.000 n=10+10)

@puellanivis (Collaborator, Author) left a comment:

😭 my clean happy isolated code… now the sendPacket has to be split up anyways.

@@ -638,6 +622,30 @@ func (c *Client) close(handle string) error {
}
}

func (c *Client) stat(path string) (*FileStat, error) {
@puellanivis (Collaborator, Author) commented:

It’s kind of odd that we were converting the FileStat from Stat into an os.FileInfo just to extract the size, while fstat didn’t. This aligns the two to work the same way, which simplifies the use point in WriteTo.

Comment on lines +44 to +48
// isRegular returns true if the mode describes a regular file.
func isRegular(mode uint32) bool {
return mode&S_IFMT == syscall.S_IFREG
}

@puellanivis (Collaborator, Author) commented:

This check is not really OS-dependent, since it works on the SFTP permission flags themselves rather than anything platform-specific. With filexfer this can be done in a sufficiently generic way; I just put it here as a convenience ahead of the other filexfer implementation PR.
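
For illustration only, here is the same check spelled against the raw POSIX file-type bit values that the SFTP permissions field carries, which makes the OS-independence explicit; isRegularGeneric and the constant names are hypothetical and not part of this PR.

// The permissions field on the wire carries POSIX file-type bits, so the check
// depends only on those values, never on the client's operating system.
const (
	fileTypeMask    = 0xF000 // numerically equal to S_IFMT  (0170000)
	fileTypeRegular = 0x8000 // numerically equal to S_IFREG (0100000)
)

// isRegularGeneric is a hypothetical OS-independent spelling of isRegular.
func isRegularGeneric(mode uint32) bool {
	return mode&fileTypeMask == fileTypeRegular
}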

@drakkan (Collaborator) left a comment:

I think this PR could solve the reported issues, but our users seem too lazy to test and give us feedback. @puellanivis, if you agree, I'll merge this and include it in a new release. That seems to be the only way to get feedback.

@puellanivis (Collaborator, Author)

While I’m OK with merging this and including it in a new release, I would prefer not to issue a new release with the filexfer subpackage in its current state (sure, it’s internal, so no one can be using it, but it should still be fixed before any release).

Let’s go ahead and merge this; I’ll copy over the fixes for the filexfer subpackage from the other PR, and then we can merge both and release. Sound good?

@drakkan (Collaborator)

drakkan commented May 21, 2021

Good, thank you. Should we do something more for #426 before the next release? I think I could find 1-2 hours this weekend if you want some help there.

@puellanivis (Collaborator, Author)

Ah yes. Adding a ReadFromWithConcurrency() receiver method. No, I think I should be able to get that done quite quickly as well. I’m currently between jobs, so I have spare time. :)
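
For context, a hedged caller-side sketch of what such a method could look like, assuming a signature along the lines of (*File).ReadFromWithConcurrency(r io.Reader, concurrency int) (int64, error); the method does not exist yet at this point, and uploadWithConcurrency is purely illustrative.

// Assumes imports "os" and "github.com/pkg/sftp", plus the hypothetical
// ReadFromWithConcurrency method described above.
func uploadWithConcurrency(client *sftp.Client, localPath, remotePath string) error {
	src, err := os.Open(localPath)
	if err != nil {
		return err
	}
	defer src.Close()

	dst, err := client.Create(remotePath)
	if err != nil {
		return err
	}
	defer dst.Close()

	// Keep 16 write requests outstanding regardless of how much the
	// source reader reports about its size.
	_, err = dst.ReadFromWithConcurrency(src, 16)
	return err
}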

@puellanivis puellanivis merged commit de44fbb into master May 22, 2021
@puellanivis puellanivis deleted the patch/sequential-concurrent-read-requests branch May 22, 2021 15:46
@puellanivis puellanivis mentioned this pull request Jun 1, 2021
@drakkan drakkan mentioned this pull request Jun 12, 2021