io.Copy data losses #440
UseConcurrentReads should not produce data loss. Can you please provide a runnable program that demonstrates the problem you are seeing? Thank you. |
We also recently merged a feature: #436, which ensures that reads are issued in sequential order, even though they are processed concurrently. We had asked if anyone could help us test whether this resolves writing issues like this. If you could try the master branch to see if the issue repeats, we would definitely be happy. |
Hi, this is a function which copies a file, nothing special:

// Note: the import block and the STFPwithPrivateKey type were not part of the
// original comment; they are filled in here from how the function uses them,
// so the snippet compiles on its own.
import (
    "errors"
    "io"
    "io/ioutil"
    "log"
    "os"
    "path/filepath"

    "github.com/pkg/sftp"
    "golang.org/x/crypto/ssh"
)

// STFPwithPrivateKey holds the connection settings (fields inferred from usage below).
type STFPwithPrivateKey struct {
    Host           string
    Port           string
    User           string
    Password       string
    privateKeyPath string
}

func copyFileFromSFTP(p STFPwithPrivateKey, path string, filename string, localfolder string) (int64, error) {
    var fileSize int64

    // Load and decrypt the private key used for public-key authentication.
    keyfile, err := ioutil.ReadFile(p.privateKeyPath)
    if err != nil {
        return fileSize, errors.New("could not find private key file: " + err.Error())
    }
    signer, err := ssh.ParsePrivateKeyWithPassphrase(keyfile, []byte(p.Password))
    if err != nil {
        return fileSize, err
    }

    config := &ssh.ClientConfig{
        User: p.User,
        Auth: []ssh.AuthMethod{
            ssh.PublicKeys(signer),
        },
        HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    }

    addr := p.Host + ":" + p.Port
    sshConnection, err := ssh.Dial("tcp", addr, config)
    if err != nil {
        log.Fatal(err)
    }
    defer sshConnection.Close()

    // Concurrent reads disabled as a workaround for the data loss described in this issue.
    client, err := sftp.NewClient(sshConnection, sftp.UseConcurrentReads(false))
    if err != nil {
        return fileSize, err
    }
    defer client.Close()

    f := path + filename
    sftpfile, err := client.OpenFile(f, os.O_RDONLY)
    if err != nil {
        return fileSize, errors.New("can't read file from SFTP server: " + err.Error())
    }
    defer sftpfile.Close()

    // Mirror the remote directory layout locally and create the destination file.
    copyfolder := filepath.Join(localfolder, path)
    err = os.MkdirAll(copyfolder, 0770)
    if err != nil {
        return fileSize, errors.New("can't create a folder to copy: " + copyfolder + ", error: " + err.Error())
    }
    ffl := filepath.Join(copyfolder, filename)
    lf, err := os.Create(ffl)
    if err != nil {
        return fileSize, errors.New("can't create copy file: " + ffl + ", error: " + err.Error())
    }
    defer lf.Close()

    fileSize, err = io.Copy(lf, sftpfile)
    return fileSize, err
}

I added the sftp.UseConcurrentReads(false) option shown above. |
Hi, can you please try git master? From your project folder execute:
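(The command itself was not preserved in this copy of the thread; for a Go-modules project, pulling the master branch would typically be something like the following.)

go get github.com/pkg/sftp@master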
Rebuild your project and repeat the test, and please let us know if the reported issue is fixed this way. Thank you! P.S. Which SFTP server are you using? |
Guys!!! It worked, really, I haven't got any issues with github.com/pkg/sftp v1.13.1-0.20210522170736-5b98d05076b8, but I'm not sure whether I should keep this option or not. But I would like to mention that the copy speed is much faster.
sftp server is :
|
@vladimirschuka thank you for confirming. The version you are using will become v1.13.1 in a few days; we don't plan any other changes before the next release, so it is safe to use. Please let us know if you encounter any issues using this version. Concurrent writes should also be fine with this version. |
🎉 OMG, such a hard bug to track down. A lesson in carefully checking your assumptions at all times. |
Fixed in v1.13.1 |
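(For reference, once the v1.13.1 tag mentioned above is published, upgrading to it would presumably be the usual module command:)

go get github.com/pkg/sftp@v1.13.1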
I am still getting an issue with reading concurrently from a remote server on 1.13.1. I do not have any control over the server, but I think the issue is that ... To clarify my issue: I see incorrect file sizes where ... |
🤔 Hm. I wasn’t really expecting that |
@croachrose would you be able to try out #443 and see if it is solving your issue? |
@puellanivis Looks like I consistently get a file that is 512 bytes too large. I can do some more digging to see if I can track down why. |
So the way our code is reading the file is kinda like this (this is not code under our control):
From ...
That looks like it increments the number of bytes whether or not an error occurred (or EOF).
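(The quoted code did not survive in this copy of the thread; as a rough, hypothetical sketch of the kind of read loop being described — the names are made up, this is not the actual code:)

import "io"

// Hypothetical illustration only: a read loop that adds n to the running total
// on every call, whether or not that same call also returned an error or io.EOF.
func countBytes(r io.Reader) (int64, error) {
    var total int64
    buf := make([]byte, 32*1024)
    for {
        n, err := r.Read(buf)
        total += int64(n) // bytes counted even when err != nil
        if err == io.EOF {
            return total, nil
        }
        if err != nil {
            return total, err
        }
    }
}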
🤔 Hm, I’ll see what I can think out. We should only be returning non-zero and ... |
I think I found it. The remote server is not sending EOF.
|
🤔 So that change has a race condition, so we can’t use that. But let me think on this a bit more. |
I pushed a change that should accomplish the same thing. After reading all of the docs, a short read should imply end of file. This is not guaranteed for a non-regular file, but it's the best we have here. |
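(As a generic illustration of that heuristic — not the package's actual code — here is a small reader wrapper that converts a short, error-free read into io.EOF, with the same caveat that this is not guaranteed to be correct for non-regular files.)

package main

import (
    "fmt"
    "io"
    "io/ioutil"
    "strings"
)

// shortReadMeansEOF treats any read that returns fewer bytes than requested,
// with no error, as end of file. Illustration of the heuristic only.
type shortReadMeansEOF struct {
    r io.Reader
}

func (s shortReadMeansEOF) Read(p []byte) (int, error) {
    n, err := s.r.Read(p)
    if err == nil && n < len(p) {
        err = io.EOF
    }
    return n, err
}

func main() {
    src := strings.NewReader("hello, sftp")
    data, err := ioutil.ReadAll(shortReadMeansEOF{src})
    fmt.Println(string(data), err) // hello, sftp <nil>
}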
@puellanivis that did it, thanks! |
Hi guys, I will add my couple of words about UseConcurrentReads. This option "made my day". I just did what is written in all the documentation:
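(The snippet itself is missing from this copy of the thread; presumably it was the usual download pattern, roughly like the following, with client, remotePath and localPath standing in for the setup shown in the full function earlier in this thread.)

remote, err := client.Open(remotePath) // remote is an *sftp.File
if err != nil {
    log.Fatal(err)
}
defer remote.Close()

local, err := os.Create(localPath)
if err != nil {
    log.Fatal(err)
}
defer local.Close()

// io.Copy uses the remote file's WriteTo method, which is the concurrent-read
// path discussed in this issue.
if _, err := io.Copy(local, remote); err != nil {
    log.Fatal(err)
}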
And the copied files were sometimes the same size, sometimes not, and that definitely depended on something, but I didn't get what was going on, because the files were exactly 32768 bytes less than the original. It is actually good, because I read a lot and now I know much more about io.Read and io.Write and bufio and... a lot. But still!
When I had done sftp.UseConcurrentReads(false), maybe it became slower, but I got full files without losses.
We definitely should add something in the godoc about UseConcurrentReads. And I still don't understand what I should do to have concurrent reads and how to work with them properly. I keep sftp.UseConcurrentReads(false). Please tell me, is it a bug or is it expected behavior?
Thank you