ipfs-http-client extremely odd behavior with nginx / remote node #2926
I'm trying to replicate this locally - I had some problems with node crashing. Here's my test script:

```js
'use strict'

const ipfsClient = require('ipfs-http-client')
const { globSource } = ipfsClient
const all = require('it-all')
const map = require('it-map')

const node = {
  host: '127.0.0.1',
  port: 6001,
  protocol: 'http'
}

async function main () {
  const ipfs = new ipfsClient({
    host: node.host, // this is an ip address for us
    port: node.port, // this is port 6001 for us
    protocol: node.protocol, // this is http for us
    headers: {
      Authorization: `Basic an-auth-token`
    }
  })

  console.info(await all(map(ipfs.add(globSource('path/to/dir', { recursive: true })), (thing) => ({
    ...thing,
    cid: thing.cid.toString()
  }))))

  console.info(await all(map(ipfs.add(globSource('path/to/dir', { recursive: true })), (thing) => ({
    ...thing,
    cid: thing.cid.toString()
  }))))
}

main()
```

It seems to work, once I made the change to the nginx config. Could you post a more complete example of how you trigger the problem?
There's only one place where IPFS requests a lock and that's the ~/.jsipfs/repo.lock file. If you were starting up a new daemon on each request to the same repo, I'd expect something like this.
No, if a second daemon starts up and encounters a lock file in the repo it'll exit with an error. You can see the steps you have to take to start two nodes on the same machine in the running-multiple-nodes example, it involves configuring them with different repos.
I should clarify here that we're hitting a go-ipfs node (v0.4.23) that is online 24/7. We aren't spinning a node up or down.
From further investigation, I tried your code @achingbrain and it worked correctly. This seems to be somehow isolated to my Node.js / Express application. I'm using multer for file uploads, so maybe that's somehow related, but I question the correlation because the multer upload operation finishes before I add to IPFS.
Update: I can confirm that basic auth plays no role here. Even after removing the basic auth requirements, it appears that the reverse proxy is still causing the issue.
@obo20 are you still having problems with this or did you get to the bottom of it? |
@achingbrain I wasn't ever able to figure this out unfortunately. We implemented a workaround to avoid using NGINX entirely for adding content to nodes (however we'd like to move back to NGINX for this if possible). I'm fairly sure this problem still exists. |
Would you be able to put a small demo repo together that shows the problem? Hopefully it'll be a fairly straightforward fix with a reproducible case. |
@obo20 We discussed this in triage today and it sounds like this is not a problem in the IPFS code. Without a clear reproduction scenario, it's impossible to know for sure.
@autonome @achingbrain apologies for the delay in getting back to you here. We've been so swamped that I haven't had a lot of time to put together a demo repo for you. Things are working alright for us now with our workaround, but I'll try to find some time in the near future to put together something reproducible for you. For now I wouldn't spend any resources on this.
👍🏼 Closing for now - reopen if there's a narrower test case showing the problem is in IPFS rather than in the nginx config, etc.
The ipfs-http-client is currently doing some extremely strange things with a particular setup we've been testing out. I've tested versions of the client from the current master all the way back to v34 and I'm still getting this issue. The bug can be recreated as follows:
Essentially what's happening is that I add a directory to the remote node and it works. Then I try to do the same thing a second time and it locks up. The only way I can get around this is to restart my Node.js Express API, which seems strange as the http-client should be stateless given that we're creating a new instance of it each time our API endpoint is called.
The steps to recreate this are as follows.
Step 1: create a temporary IPFS client for adding to the remote node:
The remote node is on a server and has the default port 5001 setting for the API, but is sitting behind an NGINX config that's reverse proxying port 6001 to port 5001 with basic auth restricting access.
The NGINX config looks like this:
Strangely, we didn't encounter this bug if we directly connected to the remote node by exposing port 5001 on it (and avoiding NGINX / basic auth). But we don't want to be doing this outside of a testing environment for security reasons.
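The original nginx config was not preserved in this thread. A minimal reverse-proxy setup matching the description above (port 6001 proxied to the API on port 5001, with basic auth) might look like the sketch below; the `server_name` and the credentials file path are assumptions, not values from this issue:

```nginx
server {
    listen 6001;
    server_name _;  # hypothetical; the real server_name isn't in the thread

    location / {
        auth_basic "Restricted";                    # basic auth gate
        auth_basic_user_file /etc/nginx/.htpasswd;  # assumed credentials file

        proxy_pass http://127.0.0.1:5001;           # the go-ipfs API port

        proxy_http_version 1.1;       # upstream defaults to HTTP/1.0, which
                                      # can break chunked/streaming uploads
        proxy_request_buffering off;  # stream large add requests through
                                      # instead of buffering them to disk
    }
}
```

Request buffering and the HTTP/1.0 upstream default are common causes of hangs when proxying large streaming uploads through nginx, which may be related to the config change mentioned earlier in the thread.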