Nodes cannot connect to each other after restart #199
Does a restart (…
Does the issue consistently appear when switching from Ubuntu to Windows and then back to Ubuntu again? On the surface it looks like a networking issue. The nodes can't even establish an initial connection.
Hello @WadeBarnes, I also think it is a networking issue. I am in London this week and am using public hotel networks. Since I am in London, I have not been able to reproduce the issue, even after turning the network on and off multiple times and switching between the two OSes. Before coming to London, I had been working at the university using the university's intranet. I think the problem might be connected to the IP address of the Dockerhost and its availability. Once I go back to the university (next week), I will create a new VON Network using the university's network, then switch to a public network and see whether the issue appears again.
@vpapanchev, I suspect the networking configuration and rules on the university's network are the issue. They could be blocking the Indy port range 9700-9799.
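A quick way to check whether that port range is being blocked is to probe the node ports from another machine on the same network. This is only a minimal sketch: the address 192.168.1.50 is a placeholder for your Dockerhost IP, and the default 4-node setup only uses ports 9701-9708.

```bash
# Probe the node and client ports of the default 4-node VON Network.
# 192.168.1.50 is a placeholder; substitute the Dockerhost IP printed at startup.
for port in 9701 9702 9703 9704 9705 9706 9707 9708; do
  if nc -z -w 2 192.168.1.50 "$port"; then
    echo "port $port reachable"
  else
    echo "port $port blocked or closed"
  fi
done
```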
Hi, so here is what I have done to reproduce the error, with some additional information:

The IP address of the Dockerhost changes when switching between WLAN and LAN. This makes sense and is normal behavior. The command …

Cheers

PS: I don't currently have access to test with another Ethernet connection, but I think the issue should be independent of the type of LAN, as long as the IP address of the Dockerhost changes.
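For anyone comparing the two networks, one way to see the mismatch is to compare the IP the scripts currently resolve as the Dockerhost with the node IPs recorded in the genesis file. This is a rough sketch; the `dockerhost` subcommand of the manage script and the webserver's `/genesis` endpoint on port 9000 are assumptions based on the default von-network setup.

```bash
# IP the manage script currently resolves as the Dockerhost (assumed subcommand).
./manage dockerhost

# Node IPs recorded in the genesis file served by the webserver (assumed endpoint).
curl -s http://localhost:9000/genesis | grep -o '"node_ip":"[^"]*"' | sort -u
```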
It's the genesis file that is the issue in this case. It is generated once on startup, if it does not already exist, for each node and the webserver.
In your case the genesis file gets created for one network or the other, and when you switch, the nodes continue to use that same genesis file to try to connect to each other.
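Based on that explanation, the usual workaround when the Dockerhost IP has changed is to remove the generated data so the genesis file is recreated on the next start. A sketch only: whether `./manage down` removes the volumes depends on the version of the manage script, so treat these commands as assumptions, and note that this also resets the ledger contents.

```bash
# Stop the containers and remove the volumes holding the generated genesis file
# and ledger data, so both are regenerated with the current Dockerhost IP.
./manage down          # assumed to remove the project's volumes
# If your copy of the script does not remove volumes, this has the same effect:
# docker-compose down -v
./manage start
```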
Yes, and it also makes sense that the IP address of the Dockerhost, i.e. the address at which the nodes are accessible, must not change, so that the nodes and other services can keep connecting to the network.
Hi @WadeBarnes, can you suggest a solution for the same issue? My application was running well for 5 months, but unfortunately my Docker containers stopped responding last week and my application servers went down. When I restart the von_network, I can't establish the open pool connection and my nodes aren't communicating with each other. Adding my error logs: ` `
@PenguinTaro, could you create a new issue for this? The original issue was resolved, and yours may not have the same root cause.
Okay |
Hello,
I am getting some really strange behavior. I am running the VON Network locally on my personal computer, which dual-boots Ubuntu 20.04 and Windows 10; I run the network on Ubuntu. Because I switch between the two OSes regularly, I need to stop and resume the VON Network on a daily basis. I noticed that sometimes the network wouldn't restart properly. This is how I reproduce the error:
Fresh start after cleaning up the Docker containers:
Now I restart the network:
After that I stopped the network once again, switched to Windows 10, and continued working there.
The next day I start Ubuntu and try to resume the network again. Something goes wrong and the nodes cannot connect to each other. The webserver also gives an error, but that is because the nodes weren't ready when the server fired up.
Logs are very long:
./manage start --logs
Using: docker-compose --log-level ERROR
Recreating von_node3_1 ... done
Recreating von_node4_1 ... done
Recreating von_node2_1 ... done
Recreating von_node1_1 ... done
Recreating von_webserver_1 ... done
Attaching to von_node1_1, von_node2_1, von_node3_1, von_webserver_1, von_node4_1
node3_1 | start_indy_node Node3 0.0.0.0 9705 0.0.0.0 9706
node1_1 | start_indy_node Node1 0.0.0.0 9701 0.0.0.0 9702
node2_1 | start_indy_node Node2 0.0.0.0 9703 0.0.0.0 9704
node4_1 | start_indy_node Node4 0.0.0.0 9707 0.0.0.0 9708
node4_1 | 2022-03-18 14:33:24,081|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2022-03-18 14:33:24,134|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node3_1 | 2022-03-18 14:33:24,174|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node2_1 | 2022-03-18 14:33:24,342|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2022-03-18 14:33:24,352|INFO|notifier_plugin_manager.py|Found notifier plugins: []
webserver_1 | 2022-03-18 14:33:24,351|INFO|server.py|REGISTER_NEW_DIDS is set to True
node1_1 | 2022-03-18 14:33:24,358|INFO|notifier_plugin_manager.py|Found notifier plugins: []
webserver_1 | 2022-03-18 14:33:24,356|INFO|server.py|LEDGER_INSTANCE_NAME is set to "localhost"
webserver_1 | 2022-03-18 14:33:24,387|INFO|server.py|Web analytics are DISABLED
webserver_1 | 2022-03-18 14:33:24,390|INFO|server.py|Running webserver...
webserver_1 | 2022-03-18 14:33:24,390|DEBUG|selector_events.py|Using selector: EpollSelector
webserver_1 | 2022-03-18 14:33:24,391|INFO|server.py|Creating trust anchor...
webserver_1 | 2022-03-18 14:33:24,392|INFO|anchor.py|Ledger cache will be stored in :memory:
webserver_1 | 2022-03-18 14:33:24,407|DEBUG|core.py|executing functools.partial(<function connect.<locals>.connector at 0x7ffa239c9ae8>)
webserver_1 | ======== Running on http://0.0.0.0:8000 ========
webserver_1 | (Press CTRL+C to quit)
webserver_1 | 2022-03-18 14:33:24,413|DEBUG|core.py|returning <sqlite3.Connection object at 0x7ffa2980f490>
webserver_1 | 2022-03-18 14:33:24,415|INFO|anchor.py|Initializing transaction database
webserver_1 | 2022-03-18 14:33:24,427|DEBUG|core.py|executing functools.partial(<built-in method executescript of sqlite3.Connection object at 0x7ffa2980f490>, '\n CREATE TABLE existent (\n ledger integer PRIMARY KEY,\n seqno integer NOT NULL DEFAULT 0\n );\n CREATE TABLE latest (\n ledger integer PRIMARY KEY,\n seqno integer NOT NULL DEFAULT 0\n );\n CREATE TABLE transactions (\n ledger integer NOT NULL,\n seqno integer NOT NULL,\n txntype integer NOT NULL,\n termsid integer,\n txnid text,\n added timestamp,\n value text,\n PRIMARY KEY (ledger, seqno)\n );\n CREATE INDEX txn_id ON transactions (txnid);\n CREATE VIRTUAL TABLE terms USING\n fts3(txnid, sender, ident, alias, verkey, short_verkey, data);\n ')
webserver_1 | 2022-03-18 14:33:24,429|DEBUG|core.py|returning <sqlite3.Cursor object at 0x7ffa251c9f10>
webserver_1 | 2022-03-18 14:33:24,433|DEBUG|core.py|executing functools.partial(<built-in method close of sqlite3.Cursor object at 0x7ffa251c9f10>)
webserver_1 | 2022-03-18 14:33:24,434|DEBUG|core.py|returning None
webserver_1 | 2022-03-18 14:33:24,437|DEBUG|pool.py|set_protocol_version: >>> protocol_version: 2
webserver_1 | 2022-03-18 14:33:24,440|DEBUG|pool.py|set_protocol_version: Creating callback
webserver_1 | 2022-03-18 14:33:24,467|DEBUG|pool.py|set_protocol_version: <<< res: None
webserver_1 | 2022-03-18 14:33:24,467|DEBUG|pool.py|list_pools: >>>
webserver_1 | 2022-03-18 14:33:24,467|DEBUG|pool.py|list_pools: Creating callback
webserver_1 | 2022-03-18 14:33:24,470|DEBUG|pool.py|list_pools: <<< res: []
webserver_1 | 2022-03-18 14:33:24,472|INFO|anchor.py|Genesis file already exists: /home/indy/ledger/sandbox/pool_transactions_genesis
webserver_1 | 2022-03-18 14:33:24,472|DEBUG|pool.py|create_pool_ledger_config: >>> config_name: 'nodepool', config: '{"genesis_txn": "/home/indy/ledger/sandbox/pool_transactions_genesis"}'
webserver_1 | 2022-03-18 14:33:24,472|DEBUG|pool.py|create_pool_ledger_config: Creating callback
webserver_1 | 2022-03-18 14:33:24,491|DEBUG|pool.py|create_pool_ledger_config: <<< res: None
webserver_1 | 2022-03-18 14:33:24,491|DEBUG|pool.py|open_pool_ledger: >>> config_name: 'nodepool', config: '{}'
webserver_1 | 2022-03-18 14:33:24,491|DEBUG|pool.py|open_pool_ledger: Creating callback
node3_1 | 2022-03-18 14:33:24,498|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2022-03-18 14:33:24,510|INFO|looper.py|Starting up indy-node
node4_1 | 2022-03-18 14:33:24,512|INFO|looper.py|Starting up indy-node
node4_1 | 2022-03-18 14:33:24,580|INFO|ledger.py|Starting ledger...
node1_1 | 2022-03-18 14:33:24,582|INFO|ledger.py|Starting ledger...
node4_1 | 2022-03-18 14:33:24,601|INFO|ledger.py|Recovering tree from transaction log
node1_1 | 2022-03-18 14:33:24,603|INFO|ledger.py|Recovering tree from transaction log
node2_1 | 2022-03-18 14:33:24,614|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node3_1 | 2022-03-18 14:33:24,635|INFO|looper.py|Starting up indy-node
node4_1 | 2022-03-18 14:33:24,651|INFO|ledger.py|Recovered tree in 0.05008614900009434 seconds
node1_1 | 2022-03-18 14:33:24,654|INFO|ledger.py|Recovered tree in 0.05087215000003198 seconds
node3_1 | 2022-03-18 14:33:24,695|INFO|ledger.py|Starting ledger...
node2_1 | 2022-03-18 14:33:24,715|INFO|looper.py|Starting up indy-node
node3_1 | 2022-03-18 14:33:24,718|INFO|ledger.py|Recovering tree from transaction log
node1_1 | 2022-03-18 14:33:24,720|INFO|ledger.py|Starting ledger...
node4_1 | 2022-03-18 14:33:24,720|INFO|ledger.py|Starting ledger...
node1_1 | 2022-03-18 14:33:24,742|INFO|ledger.py|Recovering tree from hash store of size 4
node1_1 | 2022-03-18 14:33:24,742|INFO|ledger.py|Recovered tree in 0.0006725699998924028 seconds
node4_1 | 2022-03-18 14:33:24,743|INFO|ledger.py|Recovering tree from hash store of size 4
node4_1 | 2022-03-18 14:33:24,744|INFO|ledger.py|Recovered tree in 0.00045381200004612765 seconds
node2_1 | 2022-03-18 14:33:24,779|INFO|ledger.py|Starting ledger...
node3_1 | 2022-03-18 14:33:24,785|INFO|ledger.py|Recovered tree in 0.06723432699993737 seconds
node2_1 | 2022-03-18 14:33:24,809|INFO|ledger.py|Recovering tree from transaction log
node1_1 | 2022-03-18 14:33:24,826|INFO|ledger.py|Starting ledger...
node4_1 | 2022-03-18 14:33:24,830|INFO|ledger.py|Starting ledger...
node1_1 | 2022-03-18 14:33:24,860|INFO|ledger.py|Recovering tree from hash store of size 5
node1_1 | 2022-03-18 14:33:24,862|INFO|ledger.py|Recovered tree in 0.0013548139999102204 seconds
node4_1 | 2022-03-18 14:33:24,861|INFO|ledger.py|Recovering tree from hash store of size 5
node4_1 | 2022-03-18 14:33:24,863|INFO|ledger.py|Recovered tree in 0.0020688349999318234 seconds
node3_1 | 2022-03-18 14:33:24,878|INFO|ledger.py|Starting ledger...
node2_1 | 2022-03-18 14:33:24,889|INFO|ledger.py|Recovered tree in 0.07958233100009693 seconds
node3_1 | 2022-03-18 14:33:24,908|INFO|ledger.py|Recovering tree from hash store of size 4
node3_1 | 2022-03-18 14:33:24,909|INFO|ledger.py|Recovered tree in 0.0015737659999786047 seconds
node1_1 | 2022-03-18 14:33:24,953|INFO|ledger.py|Starting ledger...
node4_1 | 2022-03-18 14:33:24,963|INFO|ledger.py|Starting ledger...
node1_1 | 2022-03-18 14:33:24,990|INFO|ledger.py|Recovering tree from hash store of size 3
node2_1 | 2022-03-18 14:33:24,991|INFO|ledger.py|Starting ledger...
node1_1 | 2022-03-18 14:33:24,992|INFO|ledger.py|Recovered tree in 0.001595060999989073 seconds
node4_1 | 2022-03-18 14:33:24,994|INFO|ledger.py|Recovering tree from hash store of size 3
node4_1 | 2022-03-18 14:33:24,997|INFO|ledger.py|Recovered tree in 0.002958984000088094 seconds
node3_1 | 2022-03-18 14:33:25,006|INFO|ledger.py|Starting ledger...
node2_1 | 2022-03-18 14:33:25,030|INFO|ledger.py|Recovering tree from hash store of size 4
node2_1 | 2022-03-18 14:33:25,034|INFO|ledger.py|Recovered tree in 0.0033799080000562753 seconds
node3_1 | 2022-03-18 14:33:25,037|INFO|ledger.py|Recovering tree from hash store of size 5
node3_1 | 2022-03-18 14:33:25,039|INFO|ledger.py|Recovered tree in 0.001780528000040249 seconds
node3_1 | 2022-03-18 14:33:25,126|INFO|ledger.py|Starting ledger...
node2_1 | 2022-03-18 14:33:25,126|INFO|ledger.py|Starting ledger...
node2_1 | 2022-03-18 14:33:25,159|INFO|ledger.py|Recovering tree from hash store of size 5
node3_1 | 2022-03-18 14:33:25,159|INFO|ledger.py|Recovering tree from hash store of size 3
node3_1 | 2022-03-18 14:33:25,161|INFO|ledger.py|Recovered tree in 0.002268571000058728 seconds
node2_1 | 2022-03-18 14:33:25,161|INFO|ledger.py|Recovered tree in 0.001658441000017774 seconds
node4_1 | 2022-03-18 14:33:25,235|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node4
node4_1 | 2022-03-18 14:33:25,236|INFO|pool_manager.py|Node4 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node4_1 | 2022-03-18 14:33:25,237|INFO|pool_manager.py|Node4 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node4_1 | 2022-03-18 14:33:25,238|INFO|pool_manager.py|Node4 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node4_1 | 2022-03-18 14:33:25,238|INFO|pool_manager.py|Node4 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node1_1 | 2022-03-18 14:33:25,244|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node1
node1_1 | 2022-03-18 14:33:25,246|INFO|pool_manager.py|Node1 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node1_1 | 2022-03-18 14:33:25,246|INFO|pool_manager.py|Node1 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node1_1 | 2022-03-18 14:33:25,247|INFO|pool_manager.py|Node1 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node1_1 | 2022-03-18 14:33:25,248|INFO|pool_manager.py|Node1 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node2_1 | 2022-03-18 14:33:25,253|INFO|ledger.py|Starting ledger...
node4_1 | 2022-03-18 14:33:25,265|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2022-03-18 14:33:25,272|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2022-03-18 14:33:25,279|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2022-03-18 14:33:25,286|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2022-03-18 14:33:25,287|INFO|stacks.py|Node4C: clients connections tracking is enabled.
node4_1 | 2022-03-18 14:33:25,288|INFO|stacks.py|Node4C: client stack restart is enabled.
node2_1 | 2022-03-18 14:33:25,291|INFO|ledger.py|Recovering tree from hash store of size 3
node2_1 | 2022-03-18 14:33:25,292|INFO|ledger.py|Recovered tree in 0.0011617920000617232 seconds
node1_1 | 2022-03-18 14:33:25,299|INFO|stacks.py|Node1C: clients connections tracking is enabled.
node1_1 | 2022-03-18 14:33:25,300|INFO|stacks.py|Node1C: client stack restart is enabled.
node3_1 | 2022-03-18 14:33:25,369|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node3
node3_1 | 2022-03-18 14:33:25,370|INFO|pool_manager.py|Node3 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node3_1 | 2022-03-18 14:33:25,370|INFO|pool_manager.py|Node3 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node3_1 | 2022-03-18 14:33:25,370|INFO|pool_manager.py|Node3 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node3_1 | 2022-03-18 14:33:25,370|INFO|pool_manager.py|Node3 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node3_1 | 2022-03-18 14:33:25,380|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node3_1 | 2022-03-18 14:33:25,384|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2022-03-18 14:33:25,387|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node4_1 | 2022-03-18 14:33:25,387|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node1_1 | 2022-03-18 14:33:25,387|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin.stats_consumer.stats_publisher.StatsPublisher'>] because it does not have a 'pluginType' attribute
node1_1 | 2022-03-18 14:33:25,387|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <enum 'Topic'>] because it does not have a 'pluginType' attribute
node1_1 | 2022-03-18 14:33:25,388|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin_loader.HasDynamicallyImportedModules'>] because it does not have a 'pluginType' attribute
node1_1 | 2022-03-18 14:33:25,388|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.stats_consumer.StatsConsumer'>] because it does not have a 'pluginType' attribute
node1_1 | 2022-03-18 14:33:25,388|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node4_1 | 2022-03-18 14:33:25,388|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin.stats_consumer.stats_publisher.StatsPublisher'>] because it does not have a 'pluginType' attribute
node1_1 | 2022-03-18 14:33:25,389|INFO|node.py|Node1 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node1', 'Node2', 'Node3', 'Node4'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node1_1 | 2022-03-18 14:33:25,390|INFO|consensus_shared_data.py|Node1:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2022-03-18 14:33:25,389|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <enum 'Topic'>] because it does not have a 'pluginType' attribute
node4_1 | 2022-03-18 14:33:25,390|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin_loader.HasDynamicallyImportedModules'>] because it does not have a 'pluginType' attribute
node4_1 | 2022-03-18 14:33:25,390|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.stats_consumer.StatsConsumer'>] because it does not have a 'pluginType' attribute
node4_1 | 2022-03-18 14:33:25,391|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node3_1 | 2022-03-18 14:33:25,390|INFO|stacks.py|Node3C: clients connections tracking is enabled.
node3_1 | 2022-03-18 14:33:25,391|INFO|stacks.py|Node3C: client stack restart is enabled.
node4_1 | 2022-03-18 14:33:25,392|INFO|node.py|Node4 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node4', 'Node2', 'Node1', 'Node3'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node4_1 | 2022-03-18 14:33:25,393|INFO|consensus_shared_data.py|Node4:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2022-03-18 14:33:25,397|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2022-03-18 14:33:25,397|NOTIFICATION|replicas.py|Node4 added replica Node4:0 to instance 0 (master)
node4_1 | 2022-03-18 14:33:25,397|INFO|replicas.py|reset monitor due to replica addition
node4_1 | 2022-03-18 14:33:25,398|INFO|consensus_shared_data.py|Node4:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node1_1 | 2022-03-18 14:33:25,397|INFO|primary_connection_monitor_service.py|Node1:0 scheduling primary connection check in 60 sec
node1_1 | 2022-03-18 14:33:25,397|NOTIFICATION|replicas.py|Node1 added replica Node1:0 to instance 0 (master)
node1_1 | 2022-03-18 14:33:25,397|INFO|replicas.py|reset monitor due to replica addition
node1_1 | 2022-03-18 14:33:25,398|INFO|consensus_shared_data.py|Node1:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2022-03-18 14:33:25,402|NOTIFICATION|replicas.py|Node4 added replica Node4:1 to instance 1 (backup)
node4_1 | 2022-03-18 14:33:25,402|INFO|replicas.py|reset monitor due to replica addition
node4_1 | 2022-03-18 14:33:25,402|INFO|consensus_shared_data.py|Node4:0 updated validators list to {'Node4', 'Node2', 'Node1', 'Node3'}
node4_1 | 2022-03-18 14:33:25,403|INFO|consensus_shared_data.py|Node4:1 updated validators list to {'Node4', 'Node2', 'Node1', 'Node3'}
node4_1 | 2022-03-18 14:33:25,403|INFO|replica.py|Node4:0 setting primaryName for view no 0 to: Node1:0
node4_1 | 2022-03-18 14:33:25,403|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 lost connection to primary of master
node4_1 | 2022-03-18 14:33:25,404|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2022-03-18 14:33:25,404|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node4_1 | 2022-03-18 14:33:25,404|INFO|replica.py|Node4:1 setting primaryName for view no 0 to: Node2:1
node4_1 | 2022-03-18 14:33:25,405|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node1_1 | 2022-03-18 14:33:25,403|NOTIFICATION|replicas.py|Node1 added replica Node1:1 to instance 1 (backup)
node1_1 | 2022-03-18 14:33:25,404|INFO|replicas.py|reset monitor due to replica addition
node1_1 | 2022-03-18 14:33:25,404|INFO|consensus_shared_data.py|Node1:0 updated validators list to {'Node1', 'Node2', 'Node3', 'Node4'}
node1_1 | 2022-03-18 14:33:25,405|INFO|consensus_shared_data.py|Node1:1 updated validators list to {'Node1', 'Node2', 'Node3', 'Node4'}
node1_1 | 2022-03-18 14:33:25,405|INFO|replica.py|Node1:0 setting primaryName for view no 0 to: Node1:0
node1_1 | 2022-03-18 14:33:25,406|NOTIFICATION|primary_connection_monitor_service.py|Node1:0 restored connection to primary of master
node4_1 | 2022-03-18 14:33:25,406|INFO|node.py|total plugins loaded in node: 0
node1_1 | 2022-03-18 14:33:25,406|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node1_1 | 2022-03-18 14:33:25,406|INFO|replica.py|Node1:1 setting primaryName for view no 0 to: Node2:1
node1_1 | 2022-03-18 14:33:25,406|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node1_1 | 2022-03-18 14:33:25,407|INFO|node.py|total plugins loaded in node: 0
node4_1 | 2022-03-18 14:33:25,409|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f02861dc588> initialized state for ledger 0: state root DL6ciHhstqGaySZY9cgWcvv6BB4bCzLCviNygF22gNZo
node4_1 | 2022-03-18 14:33:25,410|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f02861dc588> found state to be empty, recreating from ledger 2
node1_1 | 2022-03-18 14:33:25,410|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f33d7f4e588> initialized state for ledger 0: state root DL6ciHhstqGaySZY9cgWcvv6BB4bCzLCviNygF22gNZo
node1_1 | 2022-03-18 14:33:25,410|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f33d7f4e588> found state to be empty, recreating from ledger 2
node4_1 | 2022-03-18 14:33:25,411|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f02861dc588> initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node4_1 | 2022-03-18 14:33:25,411|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f02861dc588> initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node4_1 | 2022-03-18 14:33:25,412|INFO|motor.py|Node4 changing status from stopped to starting
node1_1 | 2022-03-18 14:33:25,411|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f33d7f4e588> initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node1_1 | 2022-03-18 14:33:25,413|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f33d7f4e588> initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node1_1 | 2022-03-18 14:33:25,413|INFO|motor.py|Node1 changing status from stopped to starting
node4_1 | 2022-03-18 14:33:25,415|INFO|zstack.py|Node4 will bind its listener at 0.0.0.0:9707
node4_1 | 2022-03-18 14:33:25,416|INFO|stacks.py|CONNECTION: Node4 listening for other nodes at 0.0.0.0:9707
node1_1 | 2022-03-18 14:33:25,417|INFO|zstack.py|Node1 will bind its listener at 0.0.0.0:9701
node4_1 | 2022-03-18 14:33:25,419|INFO|zstack.py|Node4C will bind its listener at 0.0.0.0:9708
node1_1 | 2022-03-18 14:33:25,421|INFO|stacks.py|CONNECTION: Node1 listening for other nodes at 0.0.0.0:9701
node1_1 | 2022-03-18 14:33:25,423|INFO|zstack.py|Node1C will bind its listener at 0.0.0.0:9702
node2_1 | 2022-03-18 14:33:25,434|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node2
node2_1 | 2022-03-18 14:33:25,435|INFO|pool_manager.py|Node2 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node2_1 | 2022-03-18 14:33:25,435|INFO|pool_manager.py|Node2 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node2_1 | 2022-03-18 14:33:25,435|INFO|pool_manager.py|Node2 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node2_1 | 2022-03-18 14:33:25,435|INFO|pool_manager.py|Node2 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node2_1 | 2022-03-18 14:33:25,448|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2022-03-18 14:33:25,448|INFO|node.py|Node4 first time running...
node4_1 | 2022-03-18 14:33:25,449|INFO|node.py|Node4 processed 0 Ordered batches for instance 0 before starting catch up
node4_1 | 2022-03-18 14:33:25,450|INFO|node.py|Node4 processed 0 Ordered batches for instance 1 before starting catch up
node2_1 | 2022-03-18 14:33:25,450|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2022-03-18 14:33:25,450|INFO|ordering_service.py|Node4:0 reverted 0 batches before starting catch up
node4_1 | 2022-03-18 14:33:25,451|INFO|node_leecher_service.py|Node4:NodeLeecherService starting catchup (is_initial=True)
node4_1 | 2022-03-18 14:33:25,451|INFO|node_leecher_service.py|Node4:NodeLeecherService transitioning from Idle to PreSyncingPool
node4_1 | 2022-03-18 14:33:25,452|INFO|cons_proof_service.py|Node4:ConsProofService:0 starts
node4_1 | 2022-03-18 14:33:25,455|INFO|kit_zstack.py|CONNECTION: Node4 found the following missing connections: Node2, Node1, Node3
node2_1 | 2022-03-18 14:33:25,456|INFO|stacks.py|Node2C: clients connections tracking is enabled.
node4_1 | 2022-03-18 14:33:25,456|INFO|zstack.py|CONNECTION: Node4 looking for Node2 at 172.21.245.197:9703
node1_1 | 2022-03-18 14:33:25,456|INFO|node.py|Node1 first time running...
node1_1 | 2022-03-18 14:33:25,457|INFO|node.py|Node1 processed 0 Ordered batches for instance 0 before starting catch up
node2_1 | 2022-03-18 14:33:25,458|INFO|stacks.py|Node2C: client stack restart is enabled.
node1_1 | 2022-03-18 14:33:25,457|INFO|node.py|Node1 processed 0 Ordered batches for instance 1 before starting catch up
node1_1 | 2022-03-18 14:33:25,458|INFO|ordering_service.py|Node1:0 reverted 0 batches before starting catch up
node1_1 | 2022-03-18 14:33:25,458|INFO|node_leecher_service.py|Node1:NodeLeecherService starting catchup (is_initial=True)
node1_1 | 2022-03-18 14:33:25,459|INFO|node_leecher_service.py|Node1:NodeLeecherService transitioning from Idle to PreSyncingPool
node1_1 | 2022-03-18 14:33:25,460|INFO|cons_proof_service.py|Node1:ConsProofService:0 starts
node4_1 | 2022-03-18 14:33:25,463|INFO|zstack.py|CONNECTION: Node4 looking for Node1 at 172.21.245.197:9701
node1_1 | 2022-03-18 14:33:25,464|INFO|kit_zstack.py|CONNECTION: Node1 found the following missing connections: Node2, Node3, Node4
node4_1 | 2022-03-18 14:33:25,464|INFO|zstack.py|CONNECTION: Node4 looking for Node3 at 172.21.245.197:9705
node1_1 | 2022-03-18 14:33:25,466|INFO|zstack.py|CONNECTION: Node1 looking for Node2 at 172.21.245.197:9703
node1_1 | 2022-03-18 14:33:25,470|INFO|zstack.py|CONNECTION: Node1 looking for Node3 at 172.21.245.197:9705
node1_1 | 2022-03-18 14:33:25,472|INFO|zstack.py|CONNECTION: Node1 looking for Node4 at 172.21.245.197:9707
node3_1 | 2022-03-18 14:33:25,487|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node3_1 | 2022-03-18 14:33:25,487|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin.stats_consumer.stats_publisher.StatsPublisher'>] because it does not have a 'pluginType' attribute
node3_1 | 2022-03-18 14:33:25,487|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <enum 'Topic'>] because it does not have a 'pluginType' attribute
node3_1 | 2022-03-18 14:33:25,488|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin_loader.HasDynamicallyImportedModules'>] because it does not have a 'pluginType' attribute
node3_1 | 2022-03-18 14:33:25,488|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.stats_consumer.StatsConsumer'>] because it does not have a 'pluginType' attribute
node3_1 | 2022-03-18 14:33:25,488|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node3_1 | 2022-03-18 14:33:25,489|INFO|node.py|Node3 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node1', 'Node4', 'Node3', 'Node2'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node3_1 | 2022-03-18 14:33:25,490|INFO|consensus_shared_data.py|Node3:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node3_1 | 2022-03-18 14:33:25,494|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node3_1 | 2022-03-18 14:33:25,494|NOTIFICATION|replicas.py|Node3 added replica Node3:0 to instance 0 (master)
node3_1 | 2022-03-18 14:33:25,495|INFO|replicas.py|reset monitor due to replica addition
node3_1 | 2022-03-18 14:33:25,496|INFO|consensus_shared_data.py|Node3:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node3_1 | 2022-03-18 14:33:25,498|NOTIFICATION|replicas.py|Node3 added replica Node3:1 to instance 1 (backup)
node3_1 | 2022-03-18 14:33:25,498|INFO|replicas.py|reset monitor due to replica addition
node3_1 | 2022-03-18 14:33:25,499|INFO|consensus_shared_data.py|Node3:0 updated validators list to {'Node1', 'Node4', 'Node3', 'Node2'}
node3_1 | 2022-03-18 14:33:25,499|INFO|consensus_shared_data.py|Node3:1 updated validators list to {'Node1', 'Node4', 'Node3', 'Node2'}
node3_1 | 2022-03-18 14:33:25,499|INFO|replica.py|Node3:0 setting primaryName for view no 0 to: Node1:0
node3_1 | 2022-03-18 14:33:25,500|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 lost connection to primary of master
node3_1 | 2022-03-18 14:33:25,500|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node3_1 | 2022-03-18 14:33:25,500|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node3_1 | 2022-03-18 14:33:25,500|INFO|replica.py|Node3:1 setting primaryName for view no 0 to: Node2:1
node3_1 | 2022-03-18 14:33:25,500|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node3_1 | 2022-03-18 14:33:25,501|INFO|node.py|total plugins loaded in node: 0
node3_1 | 2022-03-18 14:33:25,503|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f8497985588> initialized state for ledger 0: state root DL6ciHhstqGaySZY9cgWcvv6BB4bCzLCviNygF22gNZo
node3_1 | 2022-03-18 14:33:25,503|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f8497985588> found state to be empty, recreating from ledger 2
node3_1 | 2022-03-18 14:33:25,504|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f8497985588> initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node3_1 | 2022-03-18 14:33:25,504|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f8497985588> initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node3_1 | 2022-03-18 14:33:25,504|INFO|motor.py|Node3 changing status from stopped to starting
node3_1 | 2022-03-18 14:33:25,506|INFO|zstack.py|Node3 will bind its listener at 0.0.0.0:9705
node3_1 | 2022-03-18 14:33:25,507|INFO|stacks.py|CONNECTION: Node3 listening for other nodes at 0.0.0.0:9705
node3_1 | 2022-03-18 14:33:25,508|INFO|zstack.py|Node3C will bind its listener at 0.0.0.0:9706
node3_1 | 2022-03-18 14:33:25,524|INFO|node.py|Node3 first time running...
node3_1 | 2022-03-18 14:33:25,524|INFO|node.py|Node3 processed 0 Ordered batches for instance 0 before starting catch up
node3_1 | 2022-03-18 14:33:25,525|INFO|node.py|Node3 processed 0 Ordered batches for instance 1 before starting catch up
node3_1 | 2022-03-18 14:33:25,526|INFO|ordering_service.py|Node3:0 reverted 0 batches before starting catch up
node3_1 | 2022-03-18 14:33:25,527|INFO|node_leecher_service.py|Node3:NodeLeecherService starting catchup (is_initial=True)
node3_1 | 2022-03-18 14:33:25,527|INFO|node_leecher_service.py|Node3:NodeLeecherService transitioning from Idle to PreSyncingPool
node3_1 | 2022-03-18 14:33:25,528|INFO|cons_proof_service.py|Node3:ConsProofService:0 starts
node3_1 | 2022-03-18 14:33:25,530|INFO|kit_zstack.py|CONNECTION: Node3 found the following missing connections: Node1, Node4, Node2
node3_1 | 2022-03-18 14:33:25,531|INFO|zstack.py|CONNECTION: Node3 looking for Node1 at 172.21.245.197:9701
node3_1 | 2022-03-18 14:33:25,533|INFO|zstack.py|CONNECTION: Node3 looking for Node4 at 172.21.245.197:9707
node3_1 | 2022-03-18 14:33:25,534|INFO|zstack.py|CONNECTION: Node3 looking for Node2 at 172.21.245.197:9703
node2_1 | 2022-03-18 14:33:25,537|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node2_1 | 2022-03-18 14:33:25,538|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin.stats_consumer.stats_publisher.StatsPublisher'>] because it does not have a 'pluginType' attribute
node2_1 | 2022-03-18 14:33:25,538|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <enum 'Topic'>] because it does not have a 'pluginType' attribute
node2_1 | 2022-03-18 14:33:25,538|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin_loader.HasDynamicallyImportedModules'>] because it does not have a 'pluginType' attribute
node2_1 | 2022-03-18 14:33:25,538|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.stats_consumer.StatsConsumer'>] because it does not have a 'pluginType' attribute
node2_1 | 2022-03-18 14:33:25,538|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node2_1 | 2022-03-18 14:33:25,539|INFO|node.py|Node2 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node1', 'Node4', 'Node3', 'Node2'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node2_1 | 2022-03-18 14:33:25,540|INFO|consensus_shared_data.py|Node2:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node2_1 | 2022-03-18 14:33:25,543|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2022-03-18 14:33:25,543|NOTIFICATION|replicas.py|Node2 added replica Node2:0 to instance 0 (master)
node2_1 | 2022-03-18 14:33:25,543|INFO|replicas.py|reset monitor due to replica addition
node2_1 | 2022-03-18 14:33:25,544|INFO|consensus_shared_data.py|Node2:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node2_1 | 2022-03-18 14:33:25,546|NOTIFICATION|replicas.py|Node2 added replica Node2:1 to instance 1 (backup)
node2_1 | 2022-03-18 14:33:25,546|INFO|replicas.py|reset monitor due to replica addition
node2_1 | 2022-03-18 14:33:25,547|INFO|consensus_shared_data.py|Node2:0 updated validators list to {'Node1', 'Node4', 'Node3', 'Node2'}
node2_1 | 2022-03-18 14:33:25,547|INFO|consensus_shared_data.py|Node2:1 updated validators list to {'Node1', 'Node4', 'Node3', 'Node2'}
node2_1 | 2022-03-18 14:33:25,547|INFO|replica.py|Node2:0 setting primaryName for view no 0 to: Node1:0
node2_1 | 2022-03-18 14:33:25,547|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 lost connection to primary of master
node2_1 | 2022-03-18 14:33:25,548|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2022-03-18 14:33:25,548|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node2_1 | 2022-03-18 14:33:25,548|INFO|replica.py|Node2:1 setting primaryName for view no 0 to: Node2:1
node2_1 | 2022-03-18 14:33:25,548|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node2_1 | 2022-03-18 14:33:25,549|INFO|node.py|total plugins loaded in node: 0
node2_1 | 2022-03-18 14:33:25,551|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f80975b8588> initialized state for ledger 0: state root DL6ciHhstqGaySZY9cgWcvv6BB4bCzLCviNygF22gNZo
node2_1 | 2022-03-18 14:33:25,551|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f80975b8588> found state to be empty, recreating from ledger 2
node2_1 | 2022-03-18 14:33:25,552|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f80975b8588> initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node2_1 | 2022-03-18 14:33:25,552|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f80975b8588> initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node2_1 | 2022-03-18 14:33:25,552|INFO|motor.py|Node2 changing status from stopped to starting
node2_1 | 2022-03-18 14:33:25,554|INFO|zstack.py|Node2 will bind its listener at 0.0.0.0:9703
node2_1 | 2022-03-18 14:33:25,554|INFO|stacks.py|CONNECTION: Node2 listening for other nodes at 0.0.0.0:9703
node2_1 | 2022-03-18 14:33:25,555|INFO|zstack.py|Node2C will bind its listener at 0.0.0.0:9704
node2_1 | 2022-03-18 14:33:25,570|INFO|node.py|Node2 first time running...
node2_1 | 2022-03-18 14:33:25,571|INFO|node.py|Node2 processed 0 Ordered batches for instance 0 before starting catch up
node2_1 | 2022-03-18 14:33:25,571|INFO|node.py|Node2 processed 0 Ordered batches for instance 1 before starting catch up
node2_1 | 2022-03-18 14:33:25,571|INFO|ordering_service.py|Node2:0 reverted 0 batches before starting catch up
node2_1 | 2022-03-18 14:33:25,572|INFO|node_leecher_service.py|Node2:NodeLeecherService starting catchup (is_initial=True)
node2_1 | 2022-03-18 14:33:25,572|INFO|node_leecher_service.py|Node2:NodeLeecherService transitioning from Idle to PreSyncingPool
node2_1 | 2022-03-18 14:33:25,572|INFO|cons_proof_service.py|Node2:ConsProofService:0 starts
node2_1 | 2022-03-18 14:33:25,575|INFO|kit_zstack.py|CONNECTION: Node2 found the following missing connections: Node1, Node4, Node3
node2_1 | 2022-03-18 14:33:25,575|INFO|zstack.py|CONNECTION: Node2 looking for Node1 at 172.21.245.197:9701
node2_1 | 2022-03-18 14:33:25,577|INFO|zstack.py|CONNECTION: Node2 looking for Node4 at 172.21.245.197:9707
node2_1 | 2022-03-18 14:33:25,578|INFO|zstack.py|CONNECTION: Node2 looking for Node3 at 172.21.245.197:9705
node1_1 | 2022-03-18 14:33:40,461|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:33:40,461|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:33:40,537|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:33:40,576|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:33:55,469|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:33:55,476|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:33:55,541|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:33:55,592|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:34:10,474|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:34:10,477|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:34:10,557|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:34:10,596|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
webserver_1 | 2022-03-18 14:34:24,526|WARNING|libindy.py|_indy_loop_callback: Function returned error
webserver_1 | 2022-03-18 14:34:24,534|ERROR|anchor.py|Initialization error:
webserver_1 | Traceback (most recent call last):
webserver_1 | File "/home/indy/server/anchor.py", line 221, in _open_pool
webserver_1 | self._pool = await pool.open_pool_ledger(pool_name, json.dumps(pool_cfg))
webserver_1 | File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/site-packages/indy/pool.py", line 88, in open_pool_ledger
webserver_1 | open_pool_ledger.cb)
webserver_1 | indy.error.PoolLedgerTimeout
webserver_1 |
webserver_1 | The above exception was the direct cause of the following exception:
webserver_1 |
webserver_1 | Traceback (most recent call last):
webserver_1 | File "/home/indy/server/anchor.py", line 317, in open
webserver_1 | await self._open_pool()
webserver_1 | File "/home/indy/server/anchor.py", line 223, in _open_pool
webserver_1 | raise AnchorException("Error opening pool ledger connection") from e
webserver_1 | server.anchor.AnchorException: Error opening pool ledger connection
webserver_1 | 2022-03-18 14:34:24,537|INFO|server.py|--- Trust anchor initialized ---
node4_1 | 2022-03-18 14:34:25,400|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 primary has been disconnected for too long
node4_1 | 2022-03-18 14:34:25,401|INFO|primary_connection_monitor_service.py|Node4:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node4_1 | 2022-03-18 14:34:25,401|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2022-03-18 14:34:25,411|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 primary has been disconnected for too long
node4_1 | 2022-03-18 14:34:25,412|INFO|primary_connection_monitor_service.py|Node4:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node4_1 | 2022-03-18 14:34:25,413|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node1_1 | 2022-03-18 14:34:25,488|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:34:25,493|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:34:25,497|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 primary has been disconnected for too long
node3_1 | 2022-03-18 14:34:25,498|INFO|primary_connection_monitor_service.py|Node3:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node3_1 | 2022-03-18 14:34:25,499|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node3_1 | 2022-03-18 14:34:25,500|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 primary has been disconnected for too long
node3_1 | 2022-03-18 14:34:25,501|INFO|primary_connection_monitor_service.py|Node3:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node3_1 | 2022-03-18 14:34:25,502|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node2_1 | 2022-03-18 14:34:25,548|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2022-03-18 14:34:25,548|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2022-03-18 14:34:25,549|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2022-03-18 14:34:25,549|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2022-03-18 14:34:25,549|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2022-03-18 14:34:25,550|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node3_1 | 2022-03-18 14:34:25,571|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:34:25,604|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:34:40,495|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:34:40,508|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:34:40,572|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:34:40,616|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:34:55,498|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:34:55,517|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:34:55,574|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:34:55,630|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:35:10,500|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:35:10,530|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:35:10,583|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:35:10,645|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:35:25,402|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 primary has been disconnected for too long
node4_1 | 2022-03-18 14:35:25,403|INFO|primary_connection_monitor_service.py|Node4:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node4_1 | 2022-03-18 14:35:25,403|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2022-03-18 14:35:25,417|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 primary has been disconnected for too long
node4_1 | 2022-03-18 14:35:25,417|INFO|primary_connection_monitor_service.py|Node4:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node4_1 | 2022-03-18 14:35:25,417|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node1_1 | 2022-03-18 14:35:25,507|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:35:25,509|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 primary has been disconnected for too long
node3_1 | 2022-03-18 14:35:25,509|INFO|primary_connection_monitor_service.py|Node3:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node3_1 | 2022-03-18 14:35:25,509|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node3_1 | 2022-03-18 14:35:25,509|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 primary has been disconnected for too long
node3_1 | 2022-03-18 14:35:25,510|INFO|primary_connection_monitor_service.py|Node3:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node3_1 | 2022-03-18 14:35:25,510|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node4_1 | 2022-03-18 14:35:25,535|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:35:25,552|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2022-03-18 14:35:25,553|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2022-03-18 14:35:25,553|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2022-03-18 14:35:25,553|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2022-03-18 14:35:25,554|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2022-03-18 14:35:25,554|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node3_1 | 2022-03-18 14:35:25,590|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:35:25,653|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:35:40,510|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:35:40,547|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:35:40,593|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:35:40,659|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:35:55,515|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:35:55,555|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:35:55,599|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:35:55,674|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:36:10,526|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:36:10,560|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:36:10,605|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:36:10,689|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
...
// Nodes failed to connect to each other.
Has anyone encountered this before?