
fcntl: too many open files #52

Closed
tommyblue opened this issue Feb 12, 2020 · 4 comments · Fixed by #131
tommyblue commented Feb 12, 2020

I'm on macOS 10.15.3 and I'm trying to upload a folder to S3 that contains 2616 subfolders with 1 to 10 files each.

With `s5cmd -stats -r 0 -vv cp -n --parents <src> <dest>` I immediately see this error:

VERBOSE: wildOperation lister is done with error: fcntl: too many open files

but the uploads seem to proceed, although at the end generally fewer than 100 files have been uploaded.
Stats output:

2020/02/12 15:05:24 # All workers idle, finishing up...
2020/02/12 15:05:24 # Stats: S3             119   52 ops/sec
2020/02/12 15:05:24 # Stats: Failed         130   57 ops/sec
2020/02/12 15:05:24 # Stats: Total          249  109 ops/sec 2.282740841s

If I run the same command with the `-numworkers 16` option, the copy finishes without errors and all files are correctly uploaded to S3.

~$ ulimit -H -n
unlimited

~$ ulimit -S -n
256

~$ launchctl limit maxfiles
maxfiles    10240          10240

igungor commented Feb 24, 2020

Hey @tommyblue, thanks for filing an issue.

~$ ulimit -H -n
unlimited

~$ ulimit -S -n
256

~$ launchctl limit maxfiles
maxfiles 10240 10240

In addition to that, could you share the output of `ulimit -n`, please?

s5cmd walks the given <src> directory and sends each file to an internal queue that the workers receive from. By default, 256 workers are up and running, and each one tries to open the file it received from the queue. The constraint here is your OS's open file descriptor limit. I suspect your open file descriptor limit is around 256.

Increasing the limit to a high value should work.
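
For example, the soft limit can be raised for the current shell session before running s5cmd (the value 10240 below is illustrative; it must not exceed the hard limit reported by `ulimit -H -n` or, on macOS, `launchctl limit maxfiles`):

```shell
# Show the current soft limit on open file descriptors
ulimit -S -n

# Raise the soft limit for this shell session only
ulimit -S -n 10240

# Verify the new limit, then re-run the s5cmd copy
ulimit -S -n
```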

@tommyblue
Author

ulimit -n => 256

@tommyblue
Author

Your diagnosis and proposed solution are certainly correct, but I think the behaviour of s5cmd in this situation is misleading. Moreover, the error message is tagged "VERBOSE", so it's probably hidden without the -vv flag.

Some possible solutions:

  • Show this kind of error even without the verbose flag
  • Check ulimit against the number of configured workers at startup and notify the user if the values are incompatible
  • Catch the error and show a suggestion at the end of the output, something like: "s5cmd found that the configured ulimit is too low for the number of workers you're using. Either increase the ulimit value or decrease the number of parallel workers"
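
The second suggestion could be sketched as a startup check like the one below. This is hypothetical illustration code, not s5cmd's actual implementation; `checkWorkerLimit` is an invented name, and the comparison against the soft RLIMIT_NOFILE value is an assumption about how such a check might work:

```go
package main

import (
	"fmt"
	"syscall"
)

// checkWorkerLimit compares a configured worker count against the
// process's soft limit on open file descriptors and returns an error
// when the two are incompatible. Hypothetical sketch, not s5cmd code.
func checkWorkerLimit(numWorkers uint64) error {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		return err
	}
	if numWorkers >= rl.Cur {
		return fmt.Errorf(
			"numworkers (%d) is not below the open file limit (%d): "+
				"raise `ulimit -n` or lower -numworkers",
			numWorkers, rl.Cur)
	}
	return nil
}

func main() {
	// Example: the default worker count mentioned in this thread.
	if err := checkWorkerLimit(256); err != nil {
		fmt.Println("warning:", err)
		return
	}
	fmt.Println("worker count fits within the file descriptor limit")
}
```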

igungor commented Feb 24, 2020

@tommyblue You're right. The first one is definitely needed: if there is a non-retriable error, it needs to be shown even without the verbose flag.

The others are good suggestions as well. Let's keep the issue open until we have a fix.

Thanks again.

@ilkinulas ilkinulas added this to the v1.0.0 milestone Mar 2, 2020
@aykutfarsak aykutfarsak self-assigned this Mar 24, 2020
igungor pushed a commit that referenced this issue Mar 31, 2020
It tries to increase the soft limit of open files to avoid hitting the OS limit.

It also catches "too many open files" errors and prints a warning that includes a suggestion about the -numworkers argument, then exits immediately.

Fixes #52
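
The raise-the-soft-limit behaviour described in this commit message could be sketched roughly as follows. This is a simplified illustration, not the actual patch merged in #131; `raiseOpenFileLimit` is an invented name, and platform quirks (e.g. macOS capping RLIMIT_NOFILE below the reported hard limit) are not handled:

```go
package main

import (
	"fmt"
	"syscall"
)

// raiseOpenFileLimit attempts to raise the soft limit on open file
// descriptors up to the hard limit, returning the soft limit in effect.
// Simplified sketch of the behaviour the fix describes, not s5cmd code.
func raiseOpenFileLimit() (uint64, error) {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		return 0, err
	}
	if rl.Cur < rl.Max {
		old := rl.Cur
		rl.Cur = rl.Max
		if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
			// Could not raise the limit; report the old value and the error
			// so the caller can warn the user (e.g. suggest -numworkers).
			return old, err
		}
	}
	return rl.Cur, nil
}

func main() {
	limit, err := raiseOpenFileLimit()
	if err != nil {
		fmt.Println("could not raise open file limit:", err)
	}
	fmt.Println("soft open file limit:", limit)
}
```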