ipfs add FAILS with "too many open files" #4589

Open
gwpl opened this issue Jan 17, 2018 · 7 comments
Labels
topic/badger, topic/repo

Comments

gwpl commented Jan 17, 2018

Version information:

$ ipfs version --all
go-ipfs version: 0.4.13-
Repo version: 6
System version: amd64/linux
Golang version: go1.9.2

Type:

Severity: High

Description:

$ ipfs add vm.ova 
[===================================================================================>-------------------------------------------------]  63.06% 11m7s
13:41:18.781 ERROR commands/h: open /home/gwpl/.ipfs/blocks/PV/put-347989074: too many open files client.go:247
Error: open /home/user/.ipfs/blocks/PV/put-347989074: too many open files
kevina (Contributor) commented Jan 18, 2018

@gwpl what is the output of ulimit -n?

gwpl (Author) commented Jan 18, 2018

$ ulimit -n
1024

However, go-ipfs reports "successfully increased file descriptors limit to 2048". IMHO the issue might be that the filesystem is on FUSE (sshfs), which may have its own limit on file descriptors.

Therefore, it would be nice for go-ipfs to implement a graceful degradation mechanism and adapt to failures on the fly, working with as many file handles as it can get instead of crashing when new ones cannot be obtained (or, alternatively, to provide an option so I could cap it at an order of magnitude fewer file descriptors, e.g. 128).
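
(For reference, a minimal sketch, and only a sketch rather than go-ipfs's actual code, of how a Go process on Linux can inspect and raise its own RLIMIT_NOFILE soft limit; this is roughly what the "successfully increased file descriptors limit to 2048" message refers to. The target value and error handling here are illustrative assumptions.)

package main

import (
	"fmt"
	"syscall"
)

// Minimal illustration (not go-ipfs code): check the current RLIMIT_NOFILE
// soft/hard limits and try to raise the soft limit to a target value.
func main() {
	var lim syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		fmt.Println("getrlimit:", err)
		return
	}
	fmt.Printf("soft=%d hard=%d\n", lim.Cur, lim.Max)

	// Hypothetical target; a process cannot raise its soft limit above the hard limit.
	target := uint64(2048)
	if target > lim.Max {
		target = lim.Max
	}
	lim.Cur = target
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &lim); err != nil {
		fmt.Println("setrlimit:", err)
		return
	}
	fmt.Printf("successfully increased file descriptors limit to %d\n", lim.Cur)
}

(Note that a limit raised this way applies only to the go-ipfs process itself; whatever the sshfs/FUSE layer does underneath is a separate matter.)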

magik6k (Member) commented Jan 18, 2018

Most of the "too many open files" fixes, including the connection limiter, are already in 0.4.13, so this is something new.

Can you try monitoring the daemon with the following command (it may require you to install jq):

export DPID=$(pidof ipfs); watch -n0 'printf "sockets: %s\nleveldb: %s\nflatfs: %s\n" $(ls /proc/${DPID}/fd/ -l | grep "socket:" | wc -l) $(ls /proc/${DPID}/fd/ -l | grep "\\/datastore\\/" | wc -l) $(ls /proc/${DPID}/fd/ -l | grep "\\/blocks\\/" | wc -l); netstat -anpt 2>/dev/null | grep "$DPID/ipfs" | sort -k6 | column -N "a,b,c,d,e,f,g" -J | jq ".table[].f" --raw-output | uniq -c'

And report what you are seeing while adding the data?

gwpl (Author) commented Jan 18, 2018

sockets: 857
leveldb: 7
flatfs: 1
      4 LISTEN
      1 SYN_SENT
      7 ESTABLISHED
      3 SYN_SENT
     34 ESTABLISHED
      1 SYN_SENT
    107 ESTABLISHED
      6 SYN_SENT
    207 ESTABLISHED
     11 SYN_SENT
    244 ESTABLISHED
      8 SYN_SENT
    105 ESTABLISHED
      2 SYN_SENT
     20 ESTABLISHED
      2 SYN_SENT
     80 ESTABLISHED
      5 SYN_SENT

Please note that this snapshot was not taken at the moment of the crash, because:

  • ipfs add crashes in an instant, so I have no way to take a snapshot at that exact moment;
  • the ipfs add crash I reported is not reproducible every time: the first few times I added the file it crashed, then at some point it "clicked", and since then it has worked for this particular file.

Anyhow, I see a lot of warnings like these in the ipfs daemon output:

17:42:03.147 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:03.377 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:03.896 ERROR  providers: error adding new providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:258
17:42:04.721 ERROR  providers: error adding new providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:258
17:42:05.382 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:06.249 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:07.341 ERROR  providers: error adding new providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:258
17:42:07.770 ERROR  providers: error adding new providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:258
17:42:09.524 ERROR  providers: error adding new providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:258
17:42:09.872 ERROR  providers: error adding new providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:258
17:42:10.792 ERROR  providers: error adding new providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:258
17:42:11.461 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:12.848 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:13.084 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:14.228 ERROR  providers: error adding new providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:258
17:42:16.907 ERROR  providers: error adding new providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:258
17:42:16.918 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:19.244 ERROR  providers: error adding new providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:258
17:42:20.384 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:20.587 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:20.946 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:21.614 ERROR  providers: error adding new providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:258
17:42:21.994 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:22.834 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:26.229 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:27.138 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:28.323 ERROR  providers: error reading providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:263
17:42:29.804 ERROR  providers: error adding new providers: write /home/user/.ipfs/datastore/000093.log: input/output error providers.go:258

Please note that both df -lha and df -i report plenty of available space and inodes.

kevina (Contributor) commented Jan 18, 2018

@gwpl can you also try increasing the ulimit to something really high (ulimit -n 65536, for example) and see if that helps?

Stebalien (Member) commented

Using the experimental badger datastore should help significantly but, in general, IPFS is not going to work well over SSHFS. IPFS expects the datastore to be on the local machine. The issue here is likely that go-ipfs is opening files faster than it can write to them because SSHFS is so slow.

Note: We should have some form of open fd tracking/limit in the flatfs (current default) datastore. However, we're planning on deprecating it in favor of badgerdb once we deem that stable anyway, so I kind of doubt that'll get done before then.
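
(As a rough illustration only, and not flatfs's real implementation, the kind of open-fd tracking/limit mentioned above could amount to a counting semaphore around block writes, so that at most a fixed number of files are open at once and writers wait instead of failing with "too many open files" when a slow backend such as sshfs falls behind. The names maxOpen and writeBlock below are hypothetical.)

package main

import (
	"fmt"
	"os"
)

// Hypothetical sketch: a buffered channel used as a counting semaphore so
// that at most maxOpen block files are open at the same time. Callers block
// waiting for a slot instead of hitting EMFILE ("too many open files").
const maxOpen = 128

var slots = make(chan struct{}, maxOpen)

func writeBlock(path string, data []byte) error {
	slots <- struct{}{}        // acquire a slot; blocks while maxOpen files are already open
	defer func() { <-slots }() // release the slot once the file has been closed

	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.Write(data)
	return err
}

func main() {
	// Illustrative call; the path is arbitrary.
	if err := writeBlock("/tmp/example-block", []byte("hello")); err != nil {
		fmt.Println("write failed:", err)
	}
}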

gwpl (Author) commented Jan 20, 2018

Hi! I was not aware of the badger datastore. If work on a specialised store is in progress, let's wait for it! :)
