Running in Google Cloud #9
The boot-node-setup pods are running on all Kubernetes nodes because it's a DaemonSet, and the monitor deployment consists of 2 containers. Regarding the second issue: I've never used a LoadBalancer in this context, so I have no real experience with what could be wrong there, although your monitor config looks fine to me. In case a node doesn't appear in the monitor, can you try to scale the monitor replicas down to 0, and then back to 1? This should re-write the config and start the monitor up again. Maybe it was a race condition... not sure if that helps.
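(A minimal sketch of that scale-down/up dance, assuming the deployment is actually named `monitor`; check `kubectl get deployments` for the name your generated manifest uses.)

```sh
# Scale the monitor deployment to 0 and back to 1 so it rewrites its config
# and reconnects to all nodes on startup.
kubectl scale deployment monitor --replicas=0
kubectl scale deployment monitor --replicas=1
```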
Okay great, cheers, that makes sense. I only started digging into the different clusters yesterday when trying to figure out how the monitoring was working. I started from scratch, as I delete my cluster once I'm done testing for the day to avoid charges, and both nodes are now showing up in the monitor! Two new problems:
P.S. I may open a PR back to here with my updated yaml.erb template, which allows you to specify whether you want to expose a miner or the monitor 👍 I may also add code to auto-generate multiple nodes of the same type, just with incremental NodePorts.
Might be, it's hard to say from here. You could also check on the geth console whether it has the peer. If both are connected to the same bootnode and have the same genesis block, they should show up as peers.
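(One way to check this without opening an interactive console, assuming geth's IPC endpoint is at its default path inside the container; the pod name is a placeholder.)

```sh
# List the peers the miner currently knows about via the geth JS console.
kubectl exec <miner0-pod> -- geth attach ipc:/root/.ethereum/geth.ipc --exec 'admin.peers'

# Or just the peer count.
kubectl exec <miner0-pod> -- geth attach ipc:/root/.ethereum/geth.ipc --exec 'net.peerCount'
```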
The kubelet probably ran the other miner pod quicker depending on the node it ran on, which might have had more resources available at that time. In short, Kubernetes has no notion of sequences; if you want to set something up sequentially, you'd need to do that within one single unit (container).
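(A rough sketch of what "sequential within one unit" can look like: an entrypoint script that blocks until the bootnode is reachable before starting geth. The env vars and the TCP check are illustrative, not taken from this repo; the real bootnode discovery protocol runs over UDP.)

```sh
#!/bin/sh
# Block until the bootnode answers, then start geth pointed at it.
# BOOTNODE_HOST, BOOTNODE_PORT and BOOTNODE_ENODE are hypothetical env vars.
until nc -z "$BOOTNODE_HOST" "$BOOTNODE_PORT"; do
  echo "waiting for bootnode..."
  sleep 2
done
exec geth --bootnodes "$BOOTNODE_ENODE"
```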
Have a look at https://github.com/MaximilianMeister/kuberneteth/blob/master/scripts/generate_nodes.sh#L24, you'd just need to add
Ah okay, I see. It looks like having autoscaling on messed up the normal flow of the deploy, since it had to wait for more VM nodes to spin up. This probably means you need to know roughly how many nodes you'll need in your cluster, at minimum, before running the deploy, to prevent race conditions while waiting for VM nodes to boot up.

For anyone reading this later: I found that you need a minimum machine type of n1-standard-1 (1 vCPU, 3.75 GB memory), and a minimum of 3 of them (see the cluster-creation sketch after this comment). Using anything less ran out of resources and required more machines. When your miners try to generate the DAG and start mining, they can get stuck on smaller machines, as I guess they run out of memory.

Oh, I didn't realise that's what that script was for. I was going to implement it in the YAML so you didn't have to run another script. Will look into it though 👍

One more question... Do you know of any good testing tools I can run alongside this to push transactions through etc.? I know there are Ethereum test tools, but I was hoping to test along these lines:
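(For anyone scripting this: a cluster matching those minimums can be created up front so autoscaling never has to kick in mid-deploy. Cluster name and zone below are placeholders.)

```sh
# Create a 3-node GKE cluster with n1-standard-1 machines so every pod can be
# scheduled immediately, avoiding the autoscaling race described above.
gcloud container clusters create eth-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --machine-type n1-standard-1
```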
I know for point 2 I can use Stackdriver; I'm just unsure how to find the disk usage of the ledger only?
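(For later readers: one hedged way to get at the ledger size specifically, rather than whole-disk usage, is to measure geth's chain database directory inside the pod. The pod name and data dir below are assumptions based on geth's defaults.)

```sh
# Size of the chain database only, not the whole volume.
kubectl exec <miner0-pod> -- du -sh /root/.ethereum/geth/chaindata
```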
Good idea, feel free to submit something at any point you think it's usable 👍
Maybe https://github.com/ethereum/go-ethereum/wiki/Metrics-and-Monitoring#querying-metrics? It looks promising, but you'd need to implement your own program to get down to the specifics. Or https://github.com/ethereum/wiki/wiki/Benchmarks. I'd recommend asking on reddit or some broader channel than this repo; you will likely get a better hint from someone else in the developer community.
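(A sketch of the metrics route: it assumes geth was started with the `--metrics` flag, per the linked wiki page, and uses the console's debug API to dump the raw counters.)

```sh
# Dump the collected metrics from a running node's JS console.
geth attach ipc:/root/.ethereum/geth.ipc --exec 'debug.metrics(true)'
```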
Great, thank you, I was just checking to see if maybe you had spun up something alongside this setup before 😄 I've got the YAML working now, generating multiple nodes. Will do a PR soon 👍

However, I'm still sometimes encountering the problem of the nodes not discovering each other as peers, and when I reboot and they connect, they flicker between 1 and 0 peers 😕 I'm running 3 miners, all with the same genesis and boot node... Shouldn't they all have 2 peers, and not keep changing?
This is likely a polling/connection issue in eth-netstats, which gets the data through net-intelligence-api. If the geth console shows it as a peer, it should be all good! EDIT: check the logs of the 2 monitor containers, maybe there are some hints.
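(Something along these lines, assuming the two containers are the eth-netstats frontend and the net-intelligence-api client; the label and container names are guesses, so check your pod spec first.)

```sh
# Find the monitor pod, then tail each of its two containers' logs.
kubectl get pods -l app=monitor
kubectl logs <monitor-pod> -c eth-netstats
kubectl logs <monitor-pod> -c eth-net-intelligence-api
```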
Looks like it was just race conditions again! Very annoying. Oh well, after rebooting they are now all connected as expected 👍 P.S. I've submitted my PR (#10) with the changes mentioned in this thread.
Hi Maximilian, I've just encountered another problem. All of a sudden the init container can't start because it is getting permission denied:
Any ideas? This is running on the
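(For later readers: one generic way to dig into a failing init container; pod and container names are placeholders.)

```sh
# Show the init container's state and the pod's event log (permission errors
# usually show up in the events or the container's termination message).
kubectl describe pod <failing-pod>

# Fetch the init container's own logs.
kubectl logs <failing-pod> -c <init-container-name>
```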
stale
I've been trying to get this running in Google Cloud and just wanted to check a few things that don't seem right...
First: there are 3 boot-node-setup pods and 2 monitor replicas. Is that normal?
Second: I've exposed 1 mining node and the monitor with a LoadBalancer, so that I can use it as a provider for a local testing app and view the monitor. As mentioned in #8, I'm not able to see any allocated funds, only funds mined into an etherbase; I'm not sure if this is related? More importantly, I can't see the second miner in the monitor. miner0 does have miner1 as a peer, though.
Monitor-config:
Here are my services:
Hope you can shed some light on this, many thanks.