Can't share file with swarm.key, even in local network #3482
Thank you for submitting your first issue to this repository! A maintainer will be here shortly to triage and review.
Finally, remember to use https://discuss.ipfs.io if you just need general support.
Here is the code using swarm.key:

```js
'use strict'

const TCP = require('libp2p-tcp')
const MPLEX = require('libp2p-mplex')
const SECIO = require('libp2p-secio')
const Protector = require('libp2p/src/pnet')

const initLibp2pConfig = (swarmKey) => {
  return {
    modules: {
      transport: [TCP], // We're only using the TCP transport for this example
      streamMuxer: [MPLEX], // We're only using mplex muxing
      // Let's make sure to use identifying crypto in our pnet since the protector doesn't
      // care about node identity, and only the presence of private keys
      connEncryption: [SECIO],
      // Leave peer discovery empty, we don't want to find peers. We could omit the property, but it's
      // being left in for explicit readability.
      // We should explicitly dial pnet peers, or use a custom discovery service for finding nodes in our pnet
      peerDiscovery: [],
      connProtector: new Protector(swarmKey)
    }
  }
}

module.exports = initLibp2pConfig
```

The following is the IPFS config:

```js
const fs = require('fs')
const initLibp2pConfig = require('./libp2p_config');
const swarmKeyPath = './swarm.key';

module.exports = {
  ipfs_config: {
    EXPERIMENTAL: {
      pubsub: true
    },
    relay: { enabled: true, hop: { enabled: true, active: true } },
    libp2p: initLibp2pConfig(fs.readFileSync(swarmKeyPath)),
    repo: './jsipfs-open',
    config: {
      Addresses: {
        Swarm: [
          '/ip4/0.0.0.0/tcp/4002',
          '/ip4/0.0.0.0/tcp/4003/ws'
        ]
      }
    }
  },
  swarm_address: '/ip4/119.3.10.219/tcp/4002/p2p/QmPWfQStHnoc154zYW2gWFAHYWkuqxuc3v8dZQw29RqQV4'
}
```

The following is the IPFS creation:

```js
const IPFS = require('ipfs')
const config = require('./config') // path assumed: the config module shown above

// (the awaits below run inside an async function)
const ipfs = await IPFS.create(config.ipfs_config);
await ipfs.bootstrap.add(config.swarm_address);
console.log('connecting swarm', config.swarm_address)
let result = await ipfs.swarm.connect(config.swarm_address);
```
Some observations:
B needs to be able to discover A as a provider of the piece of content. This works on a local network because they've probably discovered each other as peers via mDNS, so they can then use bitswap to exchange blocks. On different public networks B would need to issue a DHT findProvs query that A would then answer. JS support for DHT queries is still a WIP so it's unlikely to work. In the meantime, if C has the blocks from A it can provide them to B, as you've found out. You can configure C as a preload node for A and B, which means that when you add a file to A it'll be pushed to C, which will make it available to B.
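As a rough illustration (not from the original reply), pointing A and B at C as a preload node when creating them programmatically could look something like the sketch below. It assumes the js-IPFS `preload` constructor option; the address is a placeholder for C's HTTP API multiaddr, not something taken from this thread.

```js
const IPFS = require('ipfs')

async function startPreloadedNode () {
  // Sketch only: the preload service pushes newly added content to the listed
  // node's HTTP API so other peers can then fetch it from there.
  // '/ip4/192.168.1.10/tcp/5002/http' is an assumed address for C's API;
  // replace it with the real host/port of your C node.
  return IPFS.create({
    preload: {
      enabled: true,
      addresses: ['/ip4/192.168.1.10/tcp/5002/http']
    }
  })
}
```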
Thanks for your quick reply.
You can configure A to use a DHT delegate to publish a provider record, and then configure B to use a different DHT delegate to find the provider record. See the delegate example here: https://github.com/achingbrain/dht-delegate-example. js-IPFS is configured with the public DHT delegates by default when started as a daemon, but you'll need to configure it yourself if starting it programmatically.
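As an illustration (not part of the original reply), delegate routing is driven by the `Addresses.Delegates` entries in the node's config; a minimal sketch of configuring it programmatically, assuming one of the public delegate nodes js-IPFS has shipped as defaults, might look like this. Check the linked example for the authoritative, up-to-date version.

```js
const IPFS = require('ipfs')

async function startNodeWithDelegate () {
  // Sketch only: the delegate node publishes and resolves provider records on
  // behalf of this node. A and B could each point at a different delegate,
  // as suggested above.
  return IPFS.create({
    config: {
      Addresses: {
        Delegates: [
          '/dns4/node0.delegate.ipfs.io/tcp/443/https'
        ]
      }
    }
  })
}
```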
It's great! Thanks.
It's really better if you just experiment with the example to learn how these things fit together. Clone it, install the deps, run the example, and read the comments. Delete the bits you think you don't need, run it again, and see whether it still works. This way you'll learn a lot more about how IPFS works and you'll be better prepared for the next problem you encounter.
Hello all experts,
I have three IPFS nodes, A, B and C, and I connected both A and B to C using the ipfs.swarm.connect(C's multiaddr) method.
I found that if I upload a file from A, I cannot cat the file on B until C has catted it, even on a local network.
By the way, I'm using a swarm.key in the libp2p configuration. Without the swarm.key, A and B have no problem sharing files on a local network, but they can't across different networks, e.g. one on a public network and one on a local network.
So how can I make it so that A and B can share files without C?
Thanks.
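For clarity, this is roughly the add/cat flow being described, as a minimal sketch. `nodeA` and `nodeB` are hypothetical handles for the A and B node instances, and it assumes a js-ipfs version where `ipfs.add` resolves to a single `{ cid }` result and `ipfs.cat` returns an async iterable.

```js
// On node A: add some content and note its CID.
const { cid } = await nodeA.add('hello from A')

// On node B: try to cat the same CID. With only C in between and no working
// DHT/preload path, this call stalls until C has fetched the blocks.
const chunks = []
for await (const chunk of nodeB.cat(cid)) {
  chunks.push(chunk)
}
console.log(Buffer.concat(chunks).toString()) // 'hello from A'
```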