Can't connect peer to bootstrap as in tests #32
Comments
Probably a problem with the cluster secret used by each peer. Note that this project is solely for running a number of automated tests on ipfs/ipfs-cluster, not for deploying any of it for real-world use within Kubernetes.
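For context on the secret mentioned above: every ipfs-cluster peer must be started with the same 32-byte hex `CLUSTER_SECRET`, or peers will reject each other's connections. A common way to generate one is:

```shell
# Generate a 32-byte hex secret; all cluster peers must share this
# exact value (e.g. via an env var or the service.json config).
export CLUSTER_SECRET=$(od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n')
echo "$CLUSTER_SECRET"
```

In Kubernetes, the natural place for this value would be a shared Secret object referenced by every peer's pod spec.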
I understand Kubernetes here is purely for testing purposes, but I just want to clarify some things if possible. Why don't the tests require the same secret across all peers? I tried using the same secret, but got some strange issue with the data folders; I don't have such an issue in Docker with two local volumes for each daemon.
They do, AFAIK; they just run a custom container which ensures that. Other than that, I am not sure why your /data folders are not persistent.
Seems like I know the root of my problem:
I don't fully understand how VOLUME directives affect Kubernetes, but maybe you want to open an issue and explain? We can fix the Dockerfiles if there's a way to improve them.
This is how to fix this behavior: it needs two volumes.
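A minimal sketch of what "two volumes" could look like in a pod spec (the names and `emptyDir` choice here are hypothetical, not taken from the project's actual manifests): each data directory declared with VOLUME in the Dockerfiles gets its own Kubernetes volume, so the data does not land in the container's ephemeral layer.

```yaml
# Hypothetical pod-spec fragment: one named volume per data directory,
# so the paths the Dockerfiles declare with VOLUME are backed by real
# Kubernetes volumes and survive container restarts.
containers:
  - name: ipfs-cluster
    image: ipfs/ipfs-cluster
    volumeMounts:
      - name: ipfs-data            # go-ipfs repo
        mountPath: /data/ipfs
      - name: cluster-data         # ipfs-cluster state
        mountPath: /data/ipfs-cluster
volumes:
  - name: ipfs-data
    emptyDir: {}                   # a PVC would be used beyond testing
  - name: cluster-data
    emptyDir: {}
```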
@mikhail-manuilov, would you want to send in a pull request with the changes you're proposing? I'll be happy to look it over and approve it once I confirm it meets our requirements.
The Kubernetes definition files here are for testing purposes only, and what I posted above was tested only in the Azure cloud. Also, I suppose having two volumes for one container is not great, maybe
Hello, I've created 4 ipfs-cluster nodes in Kubernetes using the examples from here.
Also I created service to interconnect nodes:
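A hedged sketch of such a Service (the name, selector, and use of a headless Service are assumptions for illustration; 9096 is ipfs-cluster's default cluster swarm port and 4001 is go-ipfs's swarm port):

```yaml
# Hypothetical headless Service so the peers can resolve and dial each
# other via stable per-pod DNS names.
apiVersion: v1
kind: Service
metadata:
  name: ipfs-cluster
spec:
  clusterIP: None          # headless: one DNS record per pod
  selector:
    app: ipfs-cluster
  ports:
    - name: cluster-swarm
      port: 9096           # ipfs-cluster peer-to-peer traffic
    - name: ipfs-swarm
      port: 4001           # go-ipfs peer-to-peer traffic
```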
Then I ran a script to add peers to the bootstrap peer (as in init.sh).
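For context, bootstrapping amounts to handing each non-bootstrap peer the bootstrap peer's multiaddress. A sketch of how such an address is assembled (the IP and peer ID below are placeholder values, not from this deployment):

```shell
# Build the bootstrap peer's multiaddress; 9096 is the default
# ipfs-cluster swarm port. Both variables are placeholders.
BOOTSTRAP_IP="10.0.0.1"
PEER_ID="QmExamplePeerID"
BOOTSTRAP_ADDR="/ip4/${BOOTSTRAP_IP}/tcp/9096/ipfs/${PEER_ID}"
echo "$BOOTSTRAP_ADDR"

# Each other peer would then join with something like:
#   ipfs-cluster-service daemon --bootstrap "$BOOTSTRAP_ADDR"
```

If the peers reject the bootstrap dial despite reachable TCP, a mismatched `CLUSTER_SECRET` is the usual culprit, which matches the diagnosis above.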
This is log of bootstrap pod:
Tcpdump shows a normal TCP/IP flow: