"too many open files" without doing anything #5739
Can you try running
My
Mine doesn't either. You can try just:
The numbers hover around this: sockets: 941. In the logs I'm also seeing
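A quick way to sanity-check that number on Linux, a minimal sketch assuming a single process named ipfs and that lsof is installed:

# count file descriptors held by the running ipfs daemon
lsof -p "$(pgrep -x ipfs)" | wc -l

# show the soft descriptor limit the shell (and daemons started from it) runs under
ulimit -n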
I've stopped using my old repo since it has become unusable and started a new one. Today I tried to run the daemon on the old repo again and saw this:
(Please note that I didn't interrupt the daemon or send any signal to it.) My other repo still works, however. Is this because it has far fewer files in it? Will IPFS not work if I have a lot of files?
There is definitely something weird happening with leveldb. Can you run
That's quite a large leveldb. Are you using filestore/urlstore?
I don't know what these words mean. Is it possible that I'm using them in this case?
I now have two repos on my own machine. The one showing this problem has some data I wanted to take out of it and move to the other node, but I can't, because it won't stay alive long enough to serve the data to the other. I see
Is there an emergency measure I can take just to save my files?
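One low-tech safeguard while debugging, a sketch that assumes the default repo location ~/.ipfs (adjust if IPFS_PATH points elsewhere): stop the daemon and snapshot the raw repo before experimenting further.

# with the daemon stopped, copy the repo directory somewhere safe
cp -a ~/.ipfs ~/.ipfs-backup-$(date +%F)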
Those are options in your IPFS configuration. Could you share your config?
config
{
"API": {
"HTTPHeaders": {
"Access-Control-Allow-Methods": [
"PUT",
"GET",
"POST"
],
"Access-Control-Allow-Origin": [
"http://127.0.0.1:5001",
"https://webui.ipfs.io"
],
"Access-Control-Allow-Credentials": [
"true"
]
}
},
"Addresses": {
"API": "/ip4/0.0.0.0/tcp/5003",
"Announce": null,
"Gateway": "/ip4/127.0.0.1/tcp/7073",
"NoAnnounce": null,
"Swarm": [
"/ip4/0.0.0.0/tcp/4003",
"/ip6/::/tcp/4003"
]
},
"Bootstrap": [
"/ip4/127.0.0.1/tcp/4001/ipfs/QmPaTioHoNL66UA5oXqAKiTLALkqm4R39kuFtm4Kz99Kkh"
],
"Datastore": {
"BloomFilterSize": 0,
"GCPeriod": "1h",
"HashOnRead": false,
"Spec": {
"mounts": [
{
"child": {
"path": "blocks",
"shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
"sync": true,
"type": "flatfs"
},
"mountpoint": "/blocks",
"prefix": "flatfs.datastore",
"type": "measure"
},
{
"child": {
"compression": "none",
"path": "datastore",
"type": "levelds"
},
"mountpoint": "/",
"prefix": "leveldb.datastore",
"type": "measure"
}
],
"type": "mount"
},
"StorageGCWatermark": 90,
"StorageMax": "10GB"
},
"Discovery": {
"MDNS": {
"Enabled": false,
"Interval": 10
}
},
"Experimental": {
"FilestoreEnabled": true,
"Libp2pStreamMounting": false,
"ShardingEnabled": false
},
"Gateway": {
"HTTPHeaders": {
"Access-Control-Allow-Headers": [
"X-Requested-With",
"Range"
],
"Access-Control-Allow-Methods": [
"GET"
],
"Access-Control-Allow-Origin": [
"*"
]
},
"PathPrefixes": [],
"RootRedirect": "",
"Writable": false
},
"Identity": {
"PeerID": "QmZFLSEiUELWrT91KGLNDzGx5dkzVpQhoBjoLi6t1U3jdP",
"PrivKey": "CAASqAkwggSkAgEAAoIBAQDLpgiaJqyr4SdO5XOrTIvsXOvRhpnG1d+o4IscUnqbG6qDE3OxzHZvNyK+WKLsOWY+veVmxaSIFEuJXSALat3cIw5EKp+7fQscQNwzV5lW4pKZyET4bHinHwNWcaMXj8KA3tM8E1BJR4T82LhgYVFcbm+7Bt4wSiP6K9G5jLxy9hKCD+mA4U4cNGQWbLRtKkgVd5Y2QTHSB/QNjymPruaDmaPRFwsa5RObViJL0VJSnqmXOCxA5zWSq29qzpV5qiho5kyQqsKZ3COe9KJkdBGFvZzn5RAPzo2eJ79RmDQ3KWqwp/tO+CMlA6h1tFxSgM6EBjSFG/EZA5L1DP36y2NJAgMBAAECggEBALOaiAWjzC9+UBOV63CM/u6DePr+McsZvrqK5kUhPL5lJPmbAzMwttcZEkw7kdyyNslo4tPDxXq6I3BPMD7Bjk9in2dhDCTngA/35/xj6nmlM1PrO2C5EaOah3AKoqLaB9luK2/VPL6UE+aHH/zod0AEqgeRZA3EpXwyfzGcvGrJpfC9RpfoCIzMgV0a6y3iVjXih6ltpxZikqZknfI3WrH8uLJgG19pv5nRpSWxzgkwkeLoUikv7hh+pqG6LqtmLpUbwmkQNMfZh/fSOQ5ZqMTVXbUFLrytoRHUY4fB0nRz1tflP3aN/yuTg9NCmM96H0QGoHIoU+qqRQhBUs+LWA0CgYEA3/HKiYxcyDKEyhTDnyM4CbFe5B5e17DiCOxCTx0txfZh6kRoM2qoB5IsGXnNMUZGvC7WHt6QxbmACYdgL2bMsHYTRgE4z6Rx7qWvTrwStABkU3vmKoGT9FDHDaG6MENVinipki978g/FX+peZp/KkTQ/Rrw6SDKzpIP4gym44RcCgYEA6MyFCI3XyxsLR5l+ogxBQBYdG+6pKAE05vC097hgkaTSunzzl8GB1N9sQTTO8NLgiPqwoR/xxANwHZb/lG90/VXWFpp9GQ/z8fj237oRMWC8pLNJo7nRo1z9CEjw94A8DWo04hDnAHCJxZtGPq5hZoGlL4A2qv/FJmbPybNG2p8CgYAnbDg8aJI4x/PqYydgz2FhC3Fp9RK7I69W5Mhzhu505/+qruotCvyTgJ70ySVfJED1hcU53/JabGJmywcasR0df1u7OiHXI9rOqSooUSF1wI/oxmnpV7BFFSdFdhAByQi4/K7VRjiqjy4uyWJe7IhLcYgmGqKj7REEyBqqdGDQdwKBgQCbhI1WwpMnTuDBKyxqiu9IJb26fDwqymuR37m1R0nT4h0YkgKVHaNjFwKVqPaZ8PYo6/f1G4cCIB3U1pvUiITJ/H6xyPDLPloECwK5QO7dYreC+3a1VpxSmvs6fqfjX5o+h/XeE9aN96BCD1Hk68+LkA5O5kMfBxCob8ReBVLPFwKBgFOMYYVla0uNNru8UCxlOgEq9Uvfshwc2oztbrDZlLmihoTEd72jWZevq2PWBxWGfvOmtqW1VxBY05pxuvbJQykKbZby00VPOPHuGIYnHLz/7+4Tem2L163ANsFpd/MPgERGeY4Rr9JKA2lFpeC5QiAbHFOduAu3EU8r/xErwUJ"
},
"Ipns": {
"RecordLifetime": "",
"RepublishPeriod": "",
"ResolveCacheSize": 128
},
"Mounts": {
"FuseAllowOther": false,
"IPFS": "/ipfs",
"IPNS": "/ipns"
},
"Reprovider": {
"Interval": "12h",
"Strategy": ""
},
"Routing": {
"Type": "none"
},
"SupernodeRouting": {
"Servers": null
},
"Swarm": {
"AddrFilters": null,
"ConnMgr": {
"GracePeriod": "",
"HighWater": 2,
"LowWater": 1,
"Type": ""
},
"DisableBandwidthMetrics": false,
"DisableNatPortMap": false,
"DisableRelay": false,
"EnableRelayHop": false
},
"Tour": {
"Last": ""
}
}
datastore_spec
{
"mounts": [
{
"mountpoint": "/blocks",
"path": "blocks",
"shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
"type": "flatfs"
},
{
"mountpoint": "/",
"path": "datastore",
"type": "levelds"
}
],
"type": "mount"
}
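For reference, the configuration above can be dumped with the standard CLI; depending on the go-ipfs version the output may still contain Identity.PrivKey, so it's worth scrubbing that field before posting:

# print the active configuration of the repo in $IPFS_PATH (defaults to ~/.ipfs)
ipfs config show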
By the way, my datastore has increased to
That has happened even though I have done nothing at all with this repo (aside from trying, and failing, to start the daemon on it a few times).
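A hedged way to measure that growth, assuming the default repo layout where the leveldb lives under the datastore directory:

# overall repo statistics (object count, repo size, path)
ipfs repo stat

# size of the leveldb datastore directory alone
du -sh ~/.ipfs/datastore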
Note that my asking for your entire config didn't mean you should post the private key inside it; you should edit that out.
So maybe disabling the Filestore feature (which I'm not sure how it got turned on in the first place) could help alleviate the problem enough to let the node run long enough for you to extract the information you want,
but if the data you're looking for was added through the Filestore in the first place you won't be able to access it through IPFS (although that would also mean the data is saved in actual files outside the repo). Also, note that you don't need to have the daemon running to query your local data; running the ipfs command on its own works against the local repo.
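A minimal sketch of both suggestions, with <hash> standing in for the content to recover:

# turn the Filestore experiment back off (only helps if the data was not added with --nocopy)
ipfs config --json Experimental.FilestoreEnabled false

# with the daemon stopped, the CLI runs in offline mode against the local repo
ipfs get <hash> -o recovered/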
Don't worry about the private key: I deleted part of it before posting, but I also don't intend to use it anymore. Oh, so
To query the local data, you mean using
Yes.
Yes, the same command you intended to run with the daemon running.
Post it here (since I suspect they may be related to a more general problem in your environment) and we can open an issue later if necessary.
Why is this odd? Yesterday I was trying to fetch this same hash using my other repo [1]. I think I was able to fetch the index from the ipfs.io public gateway, since w.alhur.es is/was a page being served from there.
That is an unfortunate error message that we need to fix (#5784). It normally happens when the daemon doesn't clean up its state correctly; deleting the
@schomatis can you help me with that failed
Yes, in general disregard the progress bar; in many cases it's not accurate. Trust the
Well, the data should all be on this failing node, but if it is failing I think I'll stop assuming it's a bug and try to fetch each subdirectory individually to see what happens.
Yes, it would help identify whether there's a particular block missing or whether we just have many blanks across the DAG.
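A sketch of that per-subdirectory approach, with <dir-hash> and <child-hash> as placeholders:

# list the immediate children of the directory
ipfs ls <dir-hash>

# then fetch each entry on its own to see which ones complete
ipfs get <child-hash> -o partial/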
Note: that won't really change anything at this point (except it might make it impossible to find some blocks); really, I wouldn't disable that once you've enabled it.
Do you really have 941 connections? Try
The ConnMgr section also looks a bit funky. You may want to set it to:
"ConnMgr": {
"GracePeriod": "30s",
"HighWater": 100,
"LowWater": 20,
"Type": "basic"
},
That should significantly reduce the number of open files (assuming you do have 941 open connections).
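For reference, a sketch of applying those values in one command, plus a quick connection count (the latter needs the daemon running):

# set the whole connection manager section at once
ipfs config --json Swarm.ConnMgr '{"Type":"basic","LowWater":20,"HighWater":100,"GracePeriod":"30s"}'

# count current swarm connections
ipfs swarm peers | wc -l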
I disabled the DHT later, to see if things improved, removed nodes from the bootstrap list, and set LowWater/HighWater to 1/2. I don't know why I don't have a "Type" in ConnMgr.
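(Roughly the commands that would produce that state, assuming the stock go-ipfs CLI:)

ipfs config Routing.Type none                  # stop participating in the DHT
ipfs bootstrap rm --all                        # drop the bootstrap list
ipfs config --json Swarm.ConnMgr.LowWater 1    # very low connection watermarks
ipfs config --json Swarm.ConnMgr.HighWater 2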
Version information:
go-ipfs version: 0.4.18-
Repo version: 7
System version: amd64/linux
Golang version: go1.11.1
Type:
Bug
Description:
I start my daemon and see this:
And it goes on forever.
The only command I've run is ipfs files ls /, but I don't know what my node is doing, whether it is serving some files or what; maybe it is. Is it failing to accept a connection from my own CLI? I don't know, but this seems related to ipfs/ipfs-companion#614 and #5738.
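Since the underlying symptom is "too many open files", one common mitigation worth noting is raising the descriptor limit before starting the daemon; a sketch, assuming a Linux shell and that this go-ipfs build honors the IPFS_FD_MAX environment variable:

# raise the soft limit for this shell session
ulimit -n 8192

# ask go-ipfs to raise its own limit at startup
IPFS_FD_MAX=8192 ipfs daemon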