Nodes cannot connect to each other after restart #199

Closed
vpapanchev opened this issue Mar 18, 2022 · 10 comments
vpapanchev commented Mar 18, 2022

Hello,
I am seeing some really strange behavior. I am running the VON Network locally on my personal computer, a dual-boot machine with Ubuntu 20.04 and Windows 10; the network runs on Ubuntu. Because I switch between the two OSes regularly, I need to stop and resume the VON Network on a daily basis, and I noticed that sometimes the network doesn't restart properly. This is how I reproduce the error:

Fresh start after cleaning up the Docker containers:

./manage build
./manage start --logs

node1_1      | Ledger does not exist - Creating...
...
node1_1      | ipAddresses: 172.21.245.197,172.21.245.197,172.21.245.197,172.21.245.197
...
node2_1      | Ledger does not exist - Creating...
...
node2_1      | ipAddresses: 172.21.245.197,172.21.245.197,172.21.245.197,172.21.245.197
...
// Same for the other nodes

// Nodes starting:
node2_1      | ================================================================================================
node2_1      | start_indy_node Node2 0.0.0.0 9703 0.0.0.0 9704
...
node3_1      | ================================================================================================
node3_1      | 
node3_1      | start_indy_node Node3 0.0.0.0 9705 0.0.0.0 9706
...
// Nodes listening for other nodes:
node4_1      | 2022-03-17 16:55:53,849|INFO|stacks.py|CONNECTION: Node4 listening for other nodes at 0.0.0.0:9707
node1_1      | 2022-03-17 16:55:53,863|INFO|stacks.py|CONNECTION: Node1 listening for other nodes at 0.0.0.0:9701
...
// Nodes looking for other nodes:
node1_1      | 2022-03-17 16:55:53,919|INFO|kit_zstack.py|CONNECTION: Node1 found the following missing connections: Node2, Node4, Node3
node1_1      | 2022-03-17 16:55:53,920|INFO|zstack.py|CONNECTION: Node1 looking for Node2 at 172.21.245.197:9703
node1_1      | 2022-03-17 16:55:53,926|INFO|zstack.py|CONNECTION: Node1 looking for Node4 at 172.21.245.197:9707
node1_1      | 2022-03-17 16:55:53,927|INFO|zstack.py|CONNECTION: Node1 looking for Node3 at 172.21.245.197:9705
...
node3_1      | 2022-03-17 16:55:54,014|INFO|kit_zstack.py|CONNECTION: Node3 found the following missing connections: Node4, Node2, Node1
node3_1      | 2022-03-17 16:55:54,015|INFO|zstack.py|CONNECTION: Node3 looking for Node4 at 172.21.245.197:9707
node3_1      | 2022-03-17 16:55:54,018|INFO|zstack.py|CONNECTION: Node3 looking for Node2 at 172.21.245.197:9703
node3_1      | 2022-03-17 16:55:54,018|INFO|zstack.py|CONNECTION: Node3 looking for Node1 at 172.21.245.197:9701
...

// Nodes successfully connect to each other.
// Everything works fine.

Now I restart the network:

./manage stop
./manage start

node4_1      | 2022-03-17 17:05:06,503|INFO|motor.py|Node4 changing status from stopped to starting
node4_1      | 2022-03-17 17:05:06,505|INFO|zstack.py|Node4 will bind its listener at 0.0.0.0:9707
node4_1      | 2022-03-17 17:05:06,506|INFO|stacks.py|CONNECTION: Node4 listening for other nodes at 0.0.0.0:9707
node4_1      | 2022-03-17 17:05:06,507|INFO|zstack.py|Node4C will bind its listener at 0.0.0.0:9708
node4_1      | 2022-03-17 17:05:06,546|INFO|zstack.py|CONNECTION: Node4 looking for Node1 at 172.21.245.197:9701
...
node4_1      | 2022-03-17 17:05:06,619|NOTIFICATION|keep_in_touch.py|CONNECTION: Node4 now connected to Node1
...
// Everything works just fine. Nodes connect to each other successfully again.

After that I stopped the network once again, switched to Windows 10, and continued working there.
The next day I started Ubuntu and tried to resume the network. Something goes wrong and the nodes cannot connect to each other. The webserver also reports an error, but that is only because the nodes weren't ready when the server started up.
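For what it's worth, the `ipAddresses` lines from the fresh start suggest the node IPs get recorded in the genesis file at build time, so one thing worth checking after a reboot is whether the host's address still matches what the nodes are dialing. A minimal sketch of such a check (the genesis path and the `node_ip` field name are assumptions taken from the webserver log below, not verified against this setup):

```shell
# Demo on a sample genesis-style transaction line; on a real setup the same
# grep would run against ledger/sandbox/pool_transactions_genesis
# (path assumed from the webserver log).
sample='{"txn":{"data":{"data":{"node_ip":"172.21.245.197","node_port":9701}}}}'
echo "$sample" | grep -o '"node_ip":"[^"]*"' | cut -d'"' -f4
# prints: 172.21.245.197
# Then compare against the host's current address, e.g.:
#   hostname -I
```

If the two differ after the reboot, the nodes would keep looking for each other at a stale address.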

The logs are very long; the full output is included below.

./manage start --logs

Using: docker-compose --log-level ERROR

Recreating von_node3_1 ... done
Recreating von_node4_1 ... done
Recreating von_node2_1 ... done
Recreating von_node1_1 ... done
Recreating von_webserver_1 ... done
Attaching to von_node1_1, von_node2_1, von_node3_1, von_webserver_1, von_node4_1
node3_1 | start_indy_node Node3 0.0.0.0 9705 0.0.0.0 9706
node1_1 | start_indy_node Node1 0.0.0.0 9701 0.0.0.0 9702
node2_1 | start_indy_node Node2 0.0.0.0 9703 0.0.0.0 9704
node4_1 | start_indy_node Node4 0.0.0.0 9707 0.0.0.0 9708
node4_1 | 2022-03-18 14:33:24,081|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2022-03-18 14:33:24,134|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node3_1 | 2022-03-18 14:33:24,174|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node2_1 | 2022-03-18 14:33:24,342|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2022-03-18 14:33:24,352|INFO|notifier_plugin_manager.py|Found notifier plugins: []
webserver_1 | 2022-03-18 14:33:24,351|INFO|server.py|REGISTER_NEW_DIDS is set to True
node1_1 | 2022-03-18 14:33:24,358|INFO|notifier_plugin_manager.py|Found notifier plugins: []
webserver_1 | 2022-03-18 14:33:24,356|INFO|server.py|LEDGER_INSTANCE_NAME is set to "localhost"
webserver_1 | 2022-03-18 14:33:24,387|INFO|server.py|Web analytics are DISABLED
webserver_1 | 2022-03-18 14:33:24,390|INFO|server.py|Running webserver...
webserver_1 | 2022-03-18 14:33:24,390|DEBUG|selector_events.py|Using selector: EpollSelector
webserver_1 | 2022-03-18 14:33:24,391|INFO|server.py|Creating trust anchor...
webserver_1 | 2022-03-18 14:33:24,392|INFO|anchor.py|Ledger cache will be stored in :memory:
webserver_1 | 2022-03-18 14:33:24,407|DEBUG|core.py|executing functools.partial(<function connect..connector at 0x7ffa239c9ae8>)
webserver_1 | ======== Running on http://0.0.0.0:8000 ========
webserver_1 | (Press CTRL+C to quit)
webserver_1 | 2022-03-18 14:33:24,413|DEBUG|core.py|returning <sqlite3.Connection object at 0x7ffa2980f490>
webserver_1 | 2022-03-18 14:33:24,415|INFO|anchor.py|Initializing transaction database
webserver_1 | 2022-03-18 14:33:24,427|DEBUG|core.py|executing functools.partial(<built-in method executescript of sqlite3.Connection object at 0x7ffa2980f490>, '\n CREATE TABLE existent (\n ledger integer PRIMARY KEY,\n seqno integer NOT NULL DEFAULT 0\n );\n CREATE TABLE latest (\n ledger integer PRIMARY KEY,\n seqno integer NOT NULL DEFAULT 0\n );\n CREATE TABLE transactions (\n ledger integer NOT NULL,\n seqno integer NOT NULL,\n txntype integer NOT NULL,\n termsid integer,\n txnid text,\n added timestamp,\n value text,\n PRIMARY KEY (ledger, seqno)\n );\n CREATE INDEX txn_id ON transactions (txnid);\n CREATE VIRTUAL TABLE terms USING\n fts3(txnid, sender, ident, alias, verkey, short_verkey, data);\n ')
webserver_1 | 2022-03-18 14:33:24,429|DEBUG|core.py|returning <sqlite3.Cursor object at 0x7ffa251c9f10>
webserver_1 | 2022-03-18 14:33:24,433|DEBUG|core.py|executing functools.partial(<built-in method close of sqlite3.Cursor object at 0x7ffa251c9f10>)
webserver_1 | 2022-03-18 14:33:24,434|DEBUG|core.py|returning None
webserver_1 | 2022-03-18 14:33:24,437|DEBUG|pool.py|set_protocol_version: >>> protocol_version: 2
webserver_1 | 2022-03-18 14:33:24,440|DEBUG|pool.py|set_protocol_version: Creating callback
webserver_1 | 2022-03-18 14:33:24,467|DEBUG|pool.py|set_protocol_version: <<< res: None
webserver_1 | 2022-03-18 14:33:24,467|DEBUG|pool.py|list_pools: >>>
webserver_1 | 2022-03-18 14:33:24,467|DEBUG|pool.py|list_pools: Creating callback
webserver_1 | 2022-03-18 14:33:24,470|DEBUG|pool.py|list_pools: <<< res: []
webserver_1 | 2022-03-18 14:33:24,472|INFO|anchor.py|Genesis file already exists: /home/indy/ledger/sandbox/pool_transactions_genesis
webserver_1 | 2022-03-18 14:33:24,472|DEBUG|pool.py|create_pool_ledger_config: >>> config_name: 'nodepool', config: '{"genesis_txn": "/home/indy/ledger/sandbox/pool_transactions_genesis"}'
webserver_1 | 2022-03-18 14:33:24,472|DEBUG|pool.py|create_pool_ledger_config: Creating callback
webserver_1 | 2022-03-18 14:33:24,491|DEBUG|pool.py|create_pool_ledger_config: <<< res: None
webserver_1 | 2022-03-18 14:33:24,491|DEBUG|pool.py|open_pool_ledger: >>> config_name: 'nodepool', config: '{}'
webserver_1 | 2022-03-18 14:33:24,491|DEBUG|pool.py|open_pool_ledger: Creating callback
node3_1 | 2022-03-18 14:33:24,498|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2022-03-18 14:33:24,510|INFO|looper.py|Starting up indy-node
node4_1 | 2022-03-18 14:33:24,512|INFO|looper.py|Starting up indy-node
node4_1 | 2022-03-18 14:33:24,580|INFO|ledger.py|Starting ledger...
node1_1 | 2022-03-18 14:33:24,582|INFO|ledger.py|Starting ledger...
node4_1 | 2022-03-18 14:33:24,601|INFO|ledger.py|Recovering tree from transaction log
node1_1 | 2022-03-18 14:33:24,603|INFO|ledger.py|Recovering tree from transaction log
node2_1 | 2022-03-18 14:33:24,614|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node3_1 | 2022-03-18 14:33:24,635|INFO|looper.py|Starting up indy-node
node4_1 | 2022-03-18 14:33:24,651|INFO|ledger.py|Recovered tree in 0.05008614900009434 seconds
node1_1 | 2022-03-18 14:33:24,654|INFO|ledger.py|Recovered tree in 0.05087215000003198 seconds
node3_1 | 2022-03-18 14:33:24,695|INFO|ledger.py|Starting ledger...
node2_1 | 2022-03-18 14:33:24,715|INFO|looper.py|Starting up indy-node
node3_1 | 2022-03-18 14:33:24,718|INFO|ledger.py|Recovering tree from transaction log
node1_1 | 2022-03-18 14:33:24,720|INFO|ledger.py|Starting ledger...
node4_1 | 2022-03-18 14:33:24,720|INFO|ledger.py|Starting ledger...
node1_1 | 2022-03-18 14:33:24,742|INFO|ledger.py|Recovering tree from hash store of size 4
node1_1 | 2022-03-18 14:33:24,742|INFO|ledger.py|Recovered tree in 0.0006725699998924028 seconds
node4_1 | 2022-03-18 14:33:24,743|INFO|ledger.py|Recovering tree from hash store of size 4
node4_1 | 2022-03-18 14:33:24,744|INFO|ledger.py|Recovered tree in 0.00045381200004612765 seconds
node2_1 | 2022-03-18 14:33:24,779|INFO|ledger.py|Starting ledger...
node3_1 | 2022-03-18 14:33:24,785|INFO|ledger.py|Recovered tree in 0.06723432699993737 seconds
node2_1 | 2022-03-18 14:33:24,809|INFO|ledger.py|Recovering tree from transaction log
node1_1 | 2022-03-18 14:33:24,826|INFO|ledger.py|Starting ledger...
node4_1 | 2022-03-18 14:33:24,830|INFO|ledger.py|Starting ledger...
node1_1 | 2022-03-18 14:33:24,860|INFO|ledger.py|Recovering tree from hash store of size 5
node1_1 | 2022-03-18 14:33:24,862|INFO|ledger.py|Recovered tree in 0.0013548139999102204 seconds
node4_1 | 2022-03-18 14:33:24,861|INFO|ledger.py|Recovering tree from hash store of size 5
node4_1 | 2022-03-18 14:33:24,863|INFO|ledger.py|Recovered tree in 0.0020688349999318234 seconds
node3_1 | 2022-03-18 14:33:24,878|INFO|ledger.py|Starting ledger...
node2_1 | 2022-03-18 14:33:24,889|INFO|ledger.py|Recovered tree in 0.07958233100009693 seconds
node3_1 | 2022-03-18 14:33:24,908|INFO|ledger.py|Recovering tree from hash store of size 4
node3_1 | 2022-03-18 14:33:24,909|INFO|ledger.py|Recovered tree in 0.0015737659999786047 seconds
node1_1 | 2022-03-18 14:33:24,953|INFO|ledger.py|Starting ledger...
node4_1 | 2022-03-18 14:33:24,963|INFO|ledger.py|Starting ledger...
node1_1 | 2022-03-18 14:33:24,990|INFO|ledger.py|Recovering tree from hash store of size 3
node2_1 | 2022-03-18 14:33:24,991|INFO|ledger.py|Starting ledger...
node1_1 | 2022-03-18 14:33:24,992|INFO|ledger.py|Recovered tree in 0.001595060999989073 seconds
node4_1 | 2022-03-18 14:33:24,994|INFO|ledger.py|Recovering tree from hash store of size 3
node4_1 | 2022-03-18 14:33:24,997|INFO|ledger.py|Recovered tree in 0.002958984000088094 seconds
node3_1 | 2022-03-18 14:33:25,006|INFO|ledger.py|Starting ledger...
node2_1 | 2022-03-18 14:33:25,030|INFO|ledger.py|Recovering tree from hash store of size 4
node2_1 | 2022-03-18 14:33:25,034|INFO|ledger.py|Recovered tree in 0.0033799080000562753 seconds
node3_1 | 2022-03-18 14:33:25,037|INFO|ledger.py|Recovering tree from hash store of size 5
node3_1 | 2022-03-18 14:33:25,039|INFO|ledger.py|Recovered tree in 0.001780528000040249 seconds
node3_1 | 2022-03-18 14:33:25,126|INFO|ledger.py|Starting ledger...
node2_1 | 2022-03-18 14:33:25,126|INFO|ledger.py|Starting ledger...
node2_1 | 2022-03-18 14:33:25,159|INFO|ledger.py|Recovering tree from hash store of size 5
node3_1 | 2022-03-18 14:33:25,159|INFO|ledger.py|Recovering tree from hash store of size 3
node3_1 | 2022-03-18 14:33:25,161|INFO|ledger.py|Recovered tree in 0.002268571000058728 seconds
node2_1 | 2022-03-18 14:33:25,161|INFO|ledger.py|Recovered tree in 0.001658441000017774 seconds
node4_1 | 2022-03-18 14:33:25,235|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node4
node4_1 | 2022-03-18 14:33:25,236|INFO|pool_manager.py|Node4 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node4_1 | 2022-03-18 14:33:25,237|INFO|pool_manager.py|Node4 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node4_1 | 2022-03-18 14:33:25,238|INFO|pool_manager.py|Node4 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node4_1 | 2022-03-18 14:33:25,238|INFO|pool_manager.py|Node4 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node1_1 | 2022-03-18 14:33:25,244|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node1
node1_1 | 2022-03-18 14:33:25,246|INFO|pool_manager.py|Node1 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node1_1 | 2022-03-18 14:33:25,246|INFO|pool_manager.py|Node1 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node1_1 | 2022-03-18 14:33:25,247|INFO|pool_manager.py|Node1 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node1_1 | 2022-03-18 14:33:25,248|INFO|pool_manager.py|Node1 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node2_1 | 2022-03-18 14:33:25,253|INFO|ledger.py|Starting ledger...
node4_1 | 2022-03-18 14:33:25,265|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2022-03-18 14:33:25,272|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2022-03-18 14:33:25,279|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2022-03-18 14:33:25,286|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2022-03-18 14:33:25,287|INFO|stacks.py|Node4C: clients connections tracking is enabled.
node4_1 | 2022-03-18 14:33:25,288|INFO|stacks.py|Node4C: client stack restart is enabled.
node2_1 | 2022-03-18 14:33:25,291|INFO|ledger.py|Recovering tree from hash store of size 3
node2_1 | 2022-03-18 14:33:25,292|INFO|ledger.py|Recovered tree in 0.0011617920000617232 seconds
node1_1 | 2022-03-18 14:33:25,299|INFO|stacks.py|Node1C: clients connections tracking is enabled.
node1_1 | 2022-03-18 14:33:25,300|INFO|stacks.py|Node1C: client stack restart is enabled.
node3_1 | 2022-03-18 14:33:25,369|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node3
node3_1 | 2022-03-18 14:33:25,370|INFO|pool_manager.py|Node3 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node3_1 | 2022-03-18 14:33:25,370|INFO|pool_manager.py|Node3 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node3_1 | 2022-03-18 14:33:25,370|INFO|pool_manager.py|Node3 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node3_1 | 2022-03-18 14:33:25,370|INFO|pool_manager.py|Node3 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node3_1 | 2022-03-18 14:33:25,380|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node3_1 | 2022-03-18 14:33:25,384|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2022-03-18 14:33:25,387|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node4_1 | 2022-03-18 14:33:25,387|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node1_1 | 2022-03-18 14:33:25,387|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin.stats_consumer.stats_publisher.StatsPublisher'>] because it does not have a 'pluginType' attribute
node1_1 | 2022-03-18 14:33:25,387|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <enum 'Topic'>] because it does not have a 'pluginType' attribute
node1_1 | 2022-03-18 14:33:25,388|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin_loader.HasDynamicallyImportedModules'>] because it does not have a 'pluginType' attribute
node1_1 | 2022-03-18 14:33:25,388|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.stats_consumer.StatsConsumer'>] because it does not have a 'pluginType' attribute
node1_1 | 2022-03-18 14:33:25,388|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node4_1 | 2022-03-18 14:33:25,388|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin.stats_consumer.stats_publisher.StatsPublisher'>] because it does not have a 'pluginType' attribute
node1_1 | 2022-03-18 14:33:25,389|INFO|node.py|Node1 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node1', 'Node2', 'Node3', 'Node4'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node1_1 | 2022-03-18 14:33:25,390|INFO|consensus_shared_data.py|Node1:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2022-03-18 14:33:25,389|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <enum 'Topic'>] because it does not have a 'pluginType' attribute
node4_1 | 2022-03-18 14:33:25,390|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin_loader.HasDynamicallyImportedModules'>] because it does not have a 'pluginType' attribute
node4_1 | 2022-03-18 14:33:25,390|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.stats_consumer.StatsConsumer'>] because it does not have a 'pluginType' attribute
node4_1 | 2022-03-18 14:33:25,391|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node3_1 | 2022-03-18 14:33:25,390|INFO|stacks.py|Node3C: clients connections tracking is enabled.
node3_1 | 2022-03-18 14:33:25,391|INFO|stacks.py|Node3C: client stack restart is enabled.
node4_1 | 2022-03-18 14:33:25,392|INFO|node.py|Node4 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node4', 'Node2', 'Node1', 'Node3'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node4_1 | 2022-03-18 14:33:25,393|INFO|consensus_shared_data.py|Node4:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2022-03-18 14:33:25,397|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2022-03-18 14:33:25,397|NOTIFICATION|replicas.py|Node4 added replica Node4:0 to instance 0 (master)
node4_1 | 2022-03-18 14:33:25,397|INFO|replicas.py|reset monitor due to replica addition
node4_1 | 2022-03-18 14:33:25,398|INFO|consensus_shared_data.py|Node4:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node1_1 | 2022-03-18 14:33:25,397|INFO|primary_connection_monitor_service.py|Node1:0 scheduling primary connection check in 60 sec
node1_1 | 2022-03-18 14:33:25,397|NOTIFICATION|replicas.py|Node1 added replica Node1:0 to instance 0 (master)
node1_1 | 2022-03-18 14:33:25,397|INFO|replicas.py|reset monitor due to replica addition
node1_1 | 2022-03-18 14:33:25,398|INFO|consensus_shared_data.py|Node1:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2022-03-18 14:33:25,402|NOTIFICATION|replicas.py|Node4 added replica Node4:1 to instance 1 (backup)
node4_1 | 2022-03-18 14:33:25,402|INFO|replicas.py|reset monitor due to replica addition
node4_1 | 2022-03-18 14:33:25,402|INFO|consensus_shared_data.py|Node4:0 updated validators list to {'Node4', 'Node2', 'Node1', 'Node3'}
node4_1 | 2022-03-18 14:33:25,403|INFO|consensus_shared_data.py|Node4:1 updated validators list to {'Node4', 'Node2', 'Node1', 'Node3'}
node4_1 | 2022-03-18 14:33:25,403|INFO|replica.py|Node4:0 setting primaryName for view no 0 to: Node1:0
node4_1 | 2022-03-18 14:33:25,403|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 lost connection to primary of master
node4_1 | 2022-03-18 14:33:25,404|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2022-03-18 14:33:25,404|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node4_1 | 2022-03-18 14:33:25,404|INFO|replica.py|Node4:1 setting primaryName for view no 0 to: Node2:1
node4_1 | 2022-03-18 14:33:25,405|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node1_1 | 2022-03-18 14:33:25,403|NOTIFICATION|replicas.py|Node1 added replica Node1:1 to instance 1 (backup)
node1_1 | 2022-03-18 14:33:25,404|INFO|replicas.py|reset monitor due to replica addition
node1_1 | 2022-03-18 14:33:25,404|INFO|consensus_shared_data.py|Node1:0 updated validators list to {'Node1', 'Node2', 'Node3', 'Node4'}
node1_1 | 2022-03-18 14:33:25,405|INFO|consensus_shared_data.py|Node1:1 updated validators list to {'Node1', 'Node2', 'Node3', 'Node4'}
node1_1 | 2022-03-18 14:33:25,405|INFO|replica.py|Node1:0 setting primaryName for view no 0 to: Node1:0
node1_1 | 2022-03-18 14:33:25,406|NOTIFICATION|primary_connection_monitor_service.py|Node1:0 restored connection to primary of master
node4_1 | 2022-03-18 14:33:25,406|INFO|node.py|total plugins loaded in node: 0
node1_1 | 2022-03-18 14:33:25,406|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node1_1 | 2022-03-18 14:33:25,406|INFO|replica.py|Node1:1 setting primaryName for view no 0 to: Node2:1
node1_1 | 2022-03-18 14:33:25,406|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node1_1 | 2022-03-18 14:33:25,407|INFO|node.py|total plugins loaded in node: 0
node4_1 | 2022-03-18 14:33:25,409|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f02861dc588> initialized state for ledger 0: state root DL6ciHhstqGaySZY9cgWcvv6BB4bCzLCviNygF22gNZo
node4_1 | 2022-03-18 14:33:25,410|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f02861dc588> found state to be empty, recreating from ledger 2
node1_1 | 2022-03-18 14:33:25,410|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f33d7f4e588> initialized state for ledger 0: state root DL6ciHhstqGaySZY9cgWcvv6BB4bCzLCviNygF22gNZo
node1_1 | 2022-03-18 14:33:25,410|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f33d7f4e588> found state to be empty, recreating from ledger 2
node4_1 | 2022-03-18 14:33:25,411|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f02861dc588> initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node4_1 | 2022-03-18 14:33:25,411|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f02861dc588> initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node4_1 | 2022-03-18 14:33:25,412|INFO|motor.py|Node4 changing status from stopped to starting
node1_1 | 2022-03-18 14:33:25,411|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f33d7f4e588> initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node1_1 | 2022-03-18 14:33:25,413|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f33d7f4e588> initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node1_1 | 2022-03-18 14:33:25,413|INFO|motor.py|Node1 changing status from stopped to starting
node4_1 | 2022-03-18 14:33:25,415|INFO|zstack.py|Node4 will bind its listener at 0.0.0.0:9707
node4_1 | 2022-03-18 14:33:25,416|INFO|stacks.py|CONNECTION: Node4 listening for other nodes at 0.0.0.0:9707
node1_1 | 2022-03-18 14:33:25,417|INFO|zstack.py|Node1 will bind its listener at 0.0.0.0:9701
node4_1 | 2022-03-18 14:33:25,419|INFO|zstack.py|Node4C will bind its listener at 0.0.0.0:9708
node1_1 | 2022-03-18 14:33:25,421|INFO|stacks.py|CONNECTION: Node1 listening for other nodes at 0.0.0.0:9701
node1_1 | 2022-03-18 14:33:25,423|INFO|zstack.py|Node1C will bind its listener at 0.0.0.0:9702
node2_1 | 2022-03-18 14:33:25,434|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node2
node2_1 | 2022-03-18 14:33:25,435|INFO|pool_manager.py|Node2 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node2_1 | 2022-03-18 14:33:25,435|INFO|pool_manager.py|Node2 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node2_1 | 2022-03-18 14:33:25,435|INFO|pool_manager.py|Node2 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node2_1 | 2022-03-18 14:33:25,435|INFO|pool_manager.py|Node2 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node2_1 | 2022-03-18 14:33:25,448|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2022-03-18 14:33:25,448|INFO|node.py|Node4 first time running...
node4_1 | 2022-03-18 14:33:25,449|INFO|node.py|Node4 processed 0 Ordered batches for instance 0 before starting catch up
node4_1 | 2022-03-18 14:33:25,450|INFO|node.py|Node4 processed 0 Ordered batches for instance 1 before starting catch up
node2_1 | 2022-03-18 14:33:25,450|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2022-03-18 14:33:25,450|INFO|ordering_service.py|Node4:0 reverted 0 batches before starting catch up
node4_1 | 2022-03-18 14:33:25,451|INFO|node_leecher_service.py|Node4:NodeLeecherService starting catchup (is_initial=True)
node4_1 | 2022-03-18 14:33:25,451|INFO|node_leecher_service.py|Node4:NodeLeecherService transitioning from Idle to PreSyncingPool
node4_1 | 2022-03-18 14:33:25,452|INFO|cons_proof_service.py|Node4:ConsProofService:0 starts
node4_1 | 2022-03-18 14:33:25,455|INFO|kit_zstack.py|CONNECTION: Node4 found the following missing connections: Node2, Node1, Node3
node2_1 | 2022-03-18 14:33:25,456|INFO|stacks.py|Node2C: clients connections tracking is enabled.
node4_1 | 2022-03-18 14:33:25,456|INFO|zstack.py|CONNECTION: Node4 looking for Node2 at 172.21.245.197:9703
node1_1 | 2022-03-18 14:33:25,456|INFO|node.py|Node1 first time running...
node1_1 | 2022-03-18 14:33:25,457|INFO|node.py|Node1 processed 0 Ordered batches for instance 0 before starting catch up
node2_1 | 2022-03-18 14:33:25,458|INFO|stacks.py|Node2C: client stack restart is enabled.
node1_1 | 2022-03-18 14:33:25,457|INFO|node.py|Node1 processed 0 Ordered batches for instance 1 before starting catch up
node1_1 | 2022-03-18 14:33:25,458|INFO|ordering_service.py|Node1:0 reverted 0 batches before starting catch up
node1_1 | 2022-03-18 14:33:25,458|INFO|node_leecher_service.py|Node1:NodeLeecherService starting catchup (is_initial=True)
node1_1 | 2022-03-18 14:33:25,459|INFO|node_leecher_service.py|Node1:NodeLeecherService transitioning from Idle to PreSyncingPool
node1_1 | 2022-03-18 14:33:25,460|INFO|cons_proof_service.py|Node1:ConsProofService:0 starts
node4_1 | 2022-03-18 14:33:25,463|INFO|zstack.py|CONNECTION: Node4 looking for Node1 at 172.21.245.197:9701
node1_1 | 2022-03-18 14:33:25,464|INFO|kit_zstack.py|CONNECTION: Node1 found the following missing connections: Node2, Node3, Node4
node4_1 | 2022-03-18 14:33:25,464|INFO|zstack.py|CONNECTION: Node4 looking for Node3 at 172.21.245.197:9705
node1_1 | 2022-03-18 14:33:25,466|INFO|zstack.py|CONNECTION: Node1 looking for Node2 at 172.21.245.197:9703
node1_1 | 2022-03-18 14:33:25,470|INFO|zstack.py|CONNECTION: Node1 looking for Node3 at 172.21.245.197:9705
node1_1 | 2022-03-18 14:33:25,472|INFO|zstack.py|CONNECTION: Node1 looking for Node4 at 172.21.245.197:9707
node3_1 | 2022-03-18 14:33:25,487|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node3_1 | 2022-03-18 14:33:25,487|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin.stats_consumer.stats_publisher.StatsPublisher'>] because it does not have a 'pluginType' attribute
node3_1 | 2022-03-18 14:33:25,487|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <enum 'Topic'>] because it does not have a 'pluginType' attribute
node3_1 | 2022-03-18 14:33:25,488|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin_loader.HasDynamicallyImportedModules'>] because it does not have a 'pluginType' attribute
node3_1 | 2022-03-18 14:33:25,488|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.stats_consumer.StatsConsumer'>] because it does not have a 'pluginType' attribute
node3_1 | 2022-03-18 14:33:25,488|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node3_1 | 2022-03-18 14:33:25,489|INFO|node.py|Node3 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node1', 'Node4', 'Node3', 'Node2'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node3_1 | 2022-03-18 14:33:25,490|INFO|consensus_shared_data.py|Node3:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node3_1 | 2022-03-18 14:33:25,494|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node3_1 | 2022-03-18 14:33:25,494|NOTIFICATION|replicas.py|Node3 added replica Node3:0 to instance 0 (master)
node3_1 | 2022-03-18 14:33:25,495|INFO|replicas.py|reset monitor due to replica addition
node3_1 | 2022-03-18 14:33:25,496|INFO|consensus_shared_data.py|Node3:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node3_1 | 2022-03-18 14:33:25,498|NOTIFICATION|replicas.py|Node3 added replica Node3:1 to instance 1 (backup)
node3_1 | 2022-03-18 14:33:25,498|INFO|replicas.py|reset monitor due to replica addition
node3_1 | 2022-03-18 14:33:25,499|INFO|consensus_shared_data.py|Node3:0 updated validators list to {'Node1', 'Node4', 'Node3', 'Node2'}
node3_1 | 2022-03-18 14:33:25,499|INFO|consensus_shared_data.py|Node3:1 updated validators list to {'Node1', 'Node4', 'Node3', 'Node2'}
node3_1 | 2022-03-18 14:33:25,499|INFO|replica.py|Node3:0 setting primaryName for view no 0 to: Node1:0
node3_1 | 2022-03-18 14:33:25,500|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 lost connection to primary of master
node3_1 | 2022-03-18 14:33:25,500|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node3_1 | 2022-03-18 14:33:25,500|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node3_1 | 2022-03-18 14:33:25,500|INFO|replica.py|Node3:1 setting primaryName for view no 0 to: Node2:1
node3_1 | 2022-03-18 14:33:25,500|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node3_1 | 2022-03-18 14:33:25,501|INFO|node.py|total plugins loaded in node: 0
node3_1 | 2022-03-18 14:33:25,503|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f8497985588> initialized state for ledger 0: state root DL6ciHhstqGaySZY9cgWcvv6BB4bCzLCviNygF22gNZo
node3_1 | 2022-03-18 14:33:25,503|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f8497985588> found state to be empty, recreating from ledger 2
node3_1 | 2022-03-18 14:33:25,504|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f8497985588> initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node3_1 | 2022-03-18 14:33:25,504|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f8497985588> initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node3_1 | 2022-03-18 14:33:25,504|INFO|motor.py|Node3 changing status from stopped to starting
node3_1 | 2022-03-18 14:33:25,506|INFO|zstack.py|Node3 will bind its listener at 0.0.0.0:9705
node3_1 | 2022-03-18 14:33:25,507|INFO|stacks.py|CONNECTION: Node3 listening for other nodes at 0.0.0.0:9705
node3_1 | 2022-03-18 14:33:25,508|INFO|zstack.py|Node3C will bind its listener at 0.0.0.0:9706
node3_1 | 2022-03-18 14:33:25,524|INFO|node.py|Node3 first time running...
node3_1 | 2022-03-18 14:33:25,524|INFO|node.py|Node3 processed 0 Ordered batches for instance 0 before starting catch up
node3_1 | 2022-03-18 14:33:25,525|INFO|node.py|Node3 processed 0 Ordered batches for instance 1 before starting catch up
node3_1 | 2022-03-18 14:33:25,526|INFO|ordering_service.py|Node3:0 reverted 0 batches before starting catch up
node3_1 | 2022-03-18 14:33:25,527|INFO|node_leecher_service.py|Node3:NodeLeecherService starting catchup (is_initial=True)
node3_1 | 2022-03-18 14:33:25,527|INFO|node_leecher_service.py|Node3:NodeLeecherService transitioning from Idle to PreSyncingPool
node3_1 | 2022-03-18 14:33:25,528|INFO|cons_proof_service.py|Node3:ConsProofService:0 starts
node3_1 | 2022-03-18 14:33:25,530|INFO|kit_zstack.py|CONNECTION: Node3 found the following missing connections: Node1, Node4, Node2
node3_1 | 2022-03-18 14:33:25,531|INFO|zstack.py|CONNECTION: Node3 looking for Node1 at 172.21.245.197:9701
node3_1 | 2022-03-18 14:33:25,533|INFO|zstack.py|CONNECTION: Node3 looking for Node4 at 172.21.245.197:9707
node3_1 | 2022-03-18 14:33:25,534|INFO|zstack.py|CONNECTION: Node3 looking for Node2 at 172.21.245.197:9703
node2_1 | 2022-03-18 14:33:25,537|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node2_1 | 2022-03-18 14:33:25,538|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin.stats_consumer.stats_publisher.StatsPublisher'>] because it does not have a 'pluginType' attribute
node2_1 | 2022-03-18 14:33:25,538|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <enum 'Topic'>] because it does not have a 'pluginType' attribute
node2_1 | 2022-03-18 14:33:25,538|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin_loader.HasDynamicallyImportedModules'>] because it does not have a 'pluginType' attribute
node2_1 | 2022-03-18 14:33:25,538|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.stats_consumer.StatsConsumer'>] because it does not have a 'pluginType' attribute
node2_1 | 2022-03-18 14:33:25,538|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node2_1 | 2022-03-18 14:33:25,539|INFO|node.py|Node2 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node1', 'Node4', 'Node3', 'Node2'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node2_1 | 2022-03-18 14:33:25,540|INFO|consensus_shared_data.py|Node2:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node2_1 | 2022-03-18 14:33:25,543|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2022-03-18 14:33:25,543|NOTIFICATION|replicas.py|Node2 added replica Node2:0 to instance 0 (master)
node2_1 | 2022-03-18 14:33:25,543|INFO|replicas.py|reset monitor due to replica addition
node2_1 | 2022-03-18 14:33:25,544|INFO|consensus_shared_data.py|Node2:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node2_1 | 2022-03-18 14:33:25,546|NOTIFICATION|replicas.py|Node2 added replica Node2:1 to instance 1 (backup)
node2_1 | 2022-03-18 14:33:25,546|INFO|replicas.py|reset monitor due to replica addition
node2_1 | 2022-03-18 14:33:25,547|INFO|consensus_shared_data.py|Node2:0 updated validators list to {'Node1', 'Node4', 'Node3', 'Node2'}
node2_1 | 2022-03-18 14:33:25,547|INFO|consensus_shared_data.py|Node2:1 updated validators list to {'Node1', 'Node4', 'Node3', 'Node2'}
node2_1 | 2022-03-18 14:33:25,547|INFO|replica.py|Node2:0 setting primaryName for view no 0 to: Node1:0
node2_1 | 2022-03-18 14:33:25,547|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 lost connection to primary of master
node2_1 | 2022-03-18 14:33:25,548|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2022-03-18 14:33:25,548|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node2_1 | 2022-03-18 14:33:25,548|INFO|replica.py|Node2:1 setting primaryName for view no 0 to: Node2:1
node2_1 | 2022-03-18 14:33:25,548|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node2_1 | 2022-03-18 14:33:25,549|INFO|node.py|total plugins loaded in node: 0
node2_1 | 2022-03-18 14:33:25,551|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f80975b8588> initialized state for ledger 0: state root DL6ciHhstqGaySZY9cgWcvv6BB4bCzLCviNygF22gNZo
node2_1 | 2022-03-18 14:33:25,551|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f80975b8588> found state to be empty, recreating from ledger 2
node2_1 | 2022-03-18 14:33:25,552|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f80975b8588> initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node2_1 | 2022-03-18 14:33:25,552|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f80975b8588> initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node2_1 | 2022-03-18 14:33:25,552|INFO|motor.py|Node2 changing status from stopped to starting
node2_1 | 2022-03-18 14:33:25,554|INFO|zstack.py|Node2 will bind its listener at 0.0.0.0:9703
node2_1 | 2022-03-18 14:33:25,554|INFO|stacks.py|CONNECTION: Node2 listening for other nodes at 0.0.0.0:9703
node2_1 | 2022-03-18 14:33:25,555|INFO|zstack.py|Node2C will bind its listener at 0.0.0.0:9704
node2_1 | 2022-03-18 14:33:25,570|INFO|node.py|Node2 first time running...
node2_1 | 2022-03-18 14:33:25,571|INFO|node.py|Node2 processed 0 Ordered batches for instance 0 before starting catch up
node2_1 | 2022-03-18 14:33:25,571|INFO|node.py|Node2 processed 0 Ordered batches for instance 1 before starting catch up
node2_1 | 2022-03-18 14:33:25,571|INFO|ordering_service.py|Node2:0 reverted 0 batches before starting catch up
node2_1 | 2022-03-18 14:33:25,572|INFO|node_leecher_service.py|Node2:NodeLeecherService starting catchup (is_initial=True)
node2_1 | 2022-03-18 14:33:25,572|INFO|node_leecher_service.py|Node2:NodeLeecherService transitioning from Idle to PreSyncingPool
node2_1 | 2022-03-18 14:33:25,572|INFO|cons_proof_service.py|Node2:ConsProofService:0 starts
node2_1 | 2022-03-18 14:33:25,575|INFO|kit_zstack.py|CONNECTION: Node2 found the following missing connections: Node1, Node4, Node3
node2_1 | 2022-03-18 14:33:25,575|INFO|zstack.py|CONNECTION: Node2 looking for Node1 at 172.21.245.197:9701
node2_1 | 2022-03-18 14:33:25,577|INFO|zstack.py|CONNECTION: Node2 looking for Node4 at 172.21.245.197:9707
node2_1 | 2022-03-18 14:33:25,578|INFO|zstack.py|CONNECTION: Node2 looking for Node3 at 172.21.245.197:9705
node1_1 | 2022-03-18 14:33:40,461|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:33:40,461|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:33:40,537|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:33:40,576|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:33:55,469|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:33:55,476|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:33:55,541|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:33:55,592|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:34:10,474|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:34:10,477|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:34:10,557|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:34:10,596|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
webserver_1 | 2022-03-18 14:34:24,526|WARNING|libindy.py|_indy_loop_callback: Function returned error
webserver_1 | 2022-03-18 14:34:24,534|ERROR|anchor.py|Initialization error:
webserver_1 | Traceback (most recent call last):
webserver_1 | File "/home/indy/server/anchor.py", line 221, in _open_pool
webserver_1 | self._pool = await pool.open_pool_ledger(pool_name, json.dumps(pool_cfg))
webserver_1 | File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/site-packages/indy/pool.py", line 88, in open_pool_ledger
webserver_1 | open_pool_ledger.cb)
webserver_1 | indy.error.PoolLedgerTimeout
webserver_1 |
webserver_1 | The above exception was the direct cause of the following exception:
webserver_1 |
webserver_1 | Traceback (most recent call last):
webserver_1 | File "/home/indy/server/anchor.py", line 317, in open
webserver_1 | await self._open_pool()
webserver_1 | File "/home/indy/server/anchor.py", line 223, in _open_pool
webserver_1 | raise AnchorException("Error opening pool ledger connection") from e
webserver_1 | server.anchor.AnchorException: Error opening pool ledger connection
webserver_1 | 2022-03-18 14:34:24,537|INFO|server.py|--- Trust anchor initialized ---
node4_1 | 2022-03-18 14:34:25,400|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 primary has been disconnected for too long
node4_1 | 2022-03-18 14:34:25,401|INFO|primary_connection_monitor_service.py|Node4:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node4_1 | 2022-03-18 14:34:25,401|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2022-03-18 14:34:25,411|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 primary has been disconnected for too long
node4_1 | 2022-03-18 14:34:25,412|INFO|primary_connection_monitor_service.py|Node4:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node4_1 | 2022-03-18 14:34:25,413|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node1_1 | 2022-03-18 14:34:25,488|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:34:25,493|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:34:25,497|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 primary has been disconnected for too long
node3_1 | 2022-03-18 14:34:25,498|INFO|primary_connection_monitor_service.py|Node3:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node3_1 | 2022-03-18 14:34:25,499|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node3_1 | 2022-03-18 14:34:25,500|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 primary has been disconnected for too long
node3_1 | 2022-03-18 14:34:25,501|INFO|primary_connection_monitor_service.py|Node3:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node3_1 | 2022-03-18 14:34:25,502|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node2_1 | 2022-03-18 14:34:25,548|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2022-03-18 14:34:25,548|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2022-03-18 14:34:25,549|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2022-03-18 14:34:25,549|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2022-03-18 14:34:25,549|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2022-03-18 14:34:25,550|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node3_1 | 2022-03-18 14:34:25,571|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:34:25,604|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:34:40,495|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:34:40,508|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:34:40,572|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:34:40,616|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:34:55,498|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:34:55,517|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:34:55,574|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:34:55,630|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:35:10,500|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:35:10,530|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:35:10,583|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:35:10,645|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:35:25,402|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 primary has been disconnected for too long
node4_1 | 2022-03-18 14:35:25,403|INFO|primary_connection_monitor_service.py|Node4:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node4_1 | 2022-03-18 14:35:25,403|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2022-03-18 14:35:25,417|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 primary has been disconnected for too long
node4_1 | 2022-03-18 14:35:25,417|INFO|primary_connection_monitor_service.py|Node4:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node4_1 | 2022-03-18 14:35:25,417|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node1_1 | 2022-03-18 14:35:25,507|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:35:25,509|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 primary has been disconnected for too long
node3_1 | 2022-03-18 14:35:25,509|INFO|primary_connection_monitor_service.py|Node3:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node3_1 | 2022-03-18 14:35:25,509|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node3_1 | 2022-03-18 14:35:25,509|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 primary has been disconnected for too long
node3_1 | 2022-03-18 14:35:25,510|INFO|primary_connection_monitor_service.py|Node3:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node3_1 | 2022-03-18 14:35:25,510|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node4_1 | 2022-03-18 14:35:25,535|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:35:25,552|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2022-03-18 14:35:25,553|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2022-03-18 14:35:25,553|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2022-03-18 14:35:25,553|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2022-03-18 14:35:25,554|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2022-03-18 14:35:25,554|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node3_1 | 2022-03-18 14:35:25,590|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:35:25,653|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:35:40,510|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:35:40,547|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:35:40,593|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:35:40,659|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:35:55,515|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:35:55,555|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:35:55,599|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:35:55,674|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2022-03-18 14:36:10,526|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-03-18 14:36:10,560|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2022-03-18 14:36:10,605|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2022-03-18 14:36:10,689|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
...
// Nodes failed to connect to each other.

Has anyone encountered this before?

@WadeBarnes

Does a restart (./manage stop and ./manage start) fix the issue? I'll have a closer look at the logs when I have a bit more time.

@WadeBarnes

Does the issue consistently appear when switching from Ubuntu to Windows and then back to Ubuntu again?
Are the docker volumes somehow being shared between the Ubuntu and Windows instance in your dual boot setup?

On the surface it looks like a networking issue. The nodes can't even establish an initial connection.

@vpapanchev

Hello @WadeBarnes,
When the issue occurs, a restart does not fix it.
The issue does not consistently appear when switching between Ubuntu and Windows. Actually, I don't think it has anything to do with switching between the OSes at all; the Docker containers on Ubuntu are completely independent of the Windows installation.

I also think it is a networking issue. I am in London this week, using public hotel networks, and have not been able to reproduce the issue here even after repeatedly powering the machine on and off and switching between the two OSes.

Before coming to London, I had been working at the university on the University's intranet. I suspect the problem is related to the IP address of the Dockerhost and its availability. Once I am back at the university (next week), I will create a new VON Network on the University's network, then switch to a public network and see whether the issue appears again.

@WadeBarnes

@vpapanchev, I suspect the networking configuration and rules on the University's network are the issue. They could be blocking the Indy port range 9700-9799.
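If it is port filtering, that can be tested from a machine on the same network by probing the node ports directly. A minimal sketch using bash's built-in /dev/tcp redirection (the 172.18.0.1 address and the port list are assumptions; substitute whatever ./manage dockerhost reports):

```shell
# Probe a TCP port using bash's /dev/tcp pseudo-device (no nc required).
# Prints "open" when a connection can be established, "unreachable" otherwise.
probe_port() {
  host=$1; port=$2
  if timeout 2 bash -c ": </dev/tcp/$host/$port" 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: unreachable"
  fi
}

# Check the default VON node/client ports against the docker host address,
# e.g. the address reported by ./manage dockerhost:
# for p in 9701 9702 9703 9704 9705 9706 9707 9708; do probe_port 172.18.0.1 "$p"; done
```

If the ports show as unreachable on one network but open on another, the network's filtering rules are the likely culprit.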

@vpapanchev

vpapanchev commented Apr 4, 2022

Hi,
Sorry for taking so long, but here is what I have found out.

Here is what I did to reproduce the error, with some additional information:

  1. Connect to the University LAN using an Ethernet Connection
  2. Create and start a fresh VON Network using ./manage build and ./manage start
  3. Run ./manage dockerhost to get the IP address of the Docker Host:
    DockerHost: 172.18.0.1
  4. Stop the VON network without removing the containers: ./manage stop
  5. Disconnect the Ethernet connection and connect to the University WLAN
  6. Run ./manage start --logs -> Nodes cannot connect to each other.
  7. Run ./manage dockerhost again. Received output DockerHost: 172.18.0.1
  8. Run ./manage stop to stop the network again
  9. Connect back to the University LAN using Ethernet
  10. ./manage start -> Nodes connect successfully

The IP address of the Dockerhost changes when switching between WLAN and LAN, which is expected, normal behavior. The command ./manage dockerhost outputs the correct IP address of the current Dockerhost. However, when inspecting the logs (in step 6), the nodes try to connect to each other using the previous Dockerhost address:
node1_1 | 2022-04-04 09:56:50,490|INFO|zstack.py|CONNECTION: Node1 looking for Node4 at 172.21.245.197:9707
I suppose the nodes store the Dockerhost IP address when they are created. If that is the case, the issue could be fixed by updating the nodes to always use the current Dockerhost IP address.
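That guess can be checked directly: the address each node advertises is baked into the generated genesis file, so its node_ip entries can be compared against the current Dockerhost IP. A minimal, illustrative sketch (the genesis path is taken from the node logs; check_genesis_ips is a hypothetical helper, not part of von-network):

```shell
# Illustrative check: compare the node_ip values recorded in the genesis
# file with the current docker host IP (as reported by ./manage dockerhost).
check_genesis_ips() {
  genesis=$1     # e.g. /home/indy/ledger/sandbox/pool_transactions_genesis
  current_ip=$2  # e.g. 172.18.0.1
  baked=$(grep -o '"node_ip":"[^"]*"' "$genesis" | cut -d'"' -f4 | sort -u)
  echo "genesis node_ip entries: $baked"
  echo "current docker host ip:  $current_ip"
  if [ "$baked" != "$current_ip" ]; then
    echo "MISMATCH: nodes will keep trying the stale address"
  fi
}

# Example (inside a node container, after switching networks):
# check_genesis_ips /home/indy/ledger/sandbox/pool_transactions_genesis 172.18.0.1
```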

Cheers

PS: I don't currently have access to another Ethernet connection to test with, but the issue should be independent of the type of network connection, as long as the IP address of the Dockerhost changes.

@WadeBarnes
Copy link
Member

It's the genesis file that is the issue in this case. It is generated once on startup, for each node and the webserver, but only if it does not already exist.
Refer to:

In your case the genesis file gets created for one network or the other, and when you switch, the nodes continue to use the same genesis file when trying to connect to each other.
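The generate-once behaviour can be sketched like this (a simplified, illustrative stand-in for the node startup logic, not the actual script):

```shell
# Illustrative sketch: the genesis file is only written when it does not
# already exist, so the IP captured on first start is reused on every later
# start, even if the docker host IP has changed in the meantime.
ensure_genesis() {
  genesis=$1
  current_ip=$2
  if [ ! -f "$genesis" ]; then
    # The real container runs something like:
    #   von_generate_transactions -i "$current_ip" -n <node-number>
    echo "node_ip=$current_ip" > "$genesis"
  fi
  cat "$genesis"
}
```

The practical fix after a network change is therefore to remove the generated ledger data so the genesis file is regenerated with the new Dockerhost IP; depending on the von-network version this can be done with ./manage down, or by removing the containers and volumes before starting again.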

@vpapanchev

Yes, and it also makes sense that the IP address of the Dockerhost, i.e., the address at which the nodes are accessible, should not change, so that nodes and other services can reliably connect to the network.
Thank you @WadeBarnes. I am closing the issue.

@PenguinTaro

PenguinTaro commented Oct 18, 2023

Hi @WadeBarnes, can you suggest a solution for the same issue? My application had been running well for 5 months, but unfortunately my Docker containers stopped responding last week and my application servers went down. When I restart the von_network, I can't open the pool ledger connection and my nodes aren't communicating with each other.

Adding my error logs:

`
Attaching to von_node4_1, von_node3_1, von_webserver_1, von_node1_1, von_node2_1
node1_1 | Ledger does not exist - Creating...
node1_1 | von_generate_transactions -i 3.110.18.225 -n 1
node1_1 |
node1_1 |
node1_1 | ================================================================================================
node1_1 | Generating genesis transaction file:
node1_1 | nodeArg: --nodeNum 1
node1_1 | ipAddresses: 3.110.18.225,3.110.18.225,3.110.18.225,3.110.18.225
node1_1 | genesisFilePath: /home/indy/ledger/sandbox/pool_transactions_genesis
node1_1 | ------------------------------------------------------------------------------------------------
node1_1 | generate_indy_pool_transactions --nodes 4 --clients 0 --nodeNum 1 --ips 3.110.18.225,3.110.18.225,3.110.18.225,3.110.18.225
node1_1 |
node1_1 | Generating keys for provided seed b'000000000000000000000000000Node1'
node1_1 | Init local keys for client-stack
node1_1 | Public key is HXrfcFWDjWusENBoXhV8mARzq51f1npWYWaA1GzfeMDG
node1_1 | Verification key is Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv
node1_1 | Init local keys for node-stack
node1_1 | Public key is HXrfcFWDjWusENBoXhV8mARzq51f1npWYWaA1GzfeMDG
node1_1 | Verification key is Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv
node1_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node1_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node1_1 | Nodes will not run locally, so writing /etc/indy/indy.env
node1_1 | This node with name Node1 will use ports 9701 and 9702 for nodestack and clientstack respectively
node1_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node1_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node1_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node1_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node1_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node1_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node1_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node1_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node1_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node1_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node1_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node1_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node1_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node1_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node1_1 |
node1_1 | ------------------------------------------------------------------------------------------------
node1_1 | Generated genesis transaction file; /home/indy/ledger/sandbox/pool_transactions_genesis
node1_1 |
node1_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","blskey_pop":"RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1","client_ip":"3.110.18.225","client_port":9702,"node_ip":"3.110.18.225","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv"},"metadata":{"from":"Th7MpTaRZVRYnPiabds81Y"},"type":"0"},"txnMetadata":{"seqNo":1,"txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62"},"ver":"1"}
node1_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","blskey_pop":"Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5","client_ip":"3.110.18.225","client_port":9704,"node_ip":"3.110.18.225","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb"},"metadata":{"from":"EbP4aYNeTHL6q385GuVpRV"},"type":"0"},"txnMetadata":{"seqNo":2,"txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc"},"ver":"1"}
node1_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","blskey_pop":"QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh","client_ip":"3.110.18.225","client_port":9706,"node_ip":"3.110.18.225","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya"},"metadata":{"from":"4cU41vWW82ArfxJxHkzXPG"},"type":"0"},"txnMetadata":{"seqNo":3,"txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4"},"ver":"1"}
node1_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","blskey_pop":"RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP","client_ip":"3.110.18.225","client_port":9708,"node_ip":"3.110.18.225","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA"},"metadata":{"from":"TWwCRQRZ2ZHMJFn9TzLp7W"},"type":"0"},"txnMetadata":{"seqNo":4,"txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008"},"ver":"1"}
node1_1 | ================================================================================================
node1_1 |
node1_1 | start_indy_node Node1 0.0.0.0 9701 0.0.0.0 9702
node1_1 | 2023-10-18 09:53:43,954|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2023-10-18 09:53:44,077|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2023-10-18 09:53:44,160|INFO|looper.py|Starting up indy-node
node1_1 | 2023-10-18 09:53:44,229|INFO|ledger.py|Starting ledger...
node1_1 | 2023-10-18 09:53:44,286|INFO|ledger.py|Recovering tree from transaction log
node1_1 | 2023-10-18 09:53:44,381|INFO|ledger.py|Recovered tree in 0.09487673826515675 seconds
node1_1 | 2023-10-18 09:53:44,458|INFO|ledger.py|Starting ledger...
node1_1 | 2023-10-18 09:53:44,488|INFO|ledger.py|Recovering tree from transaction log
node1_1 | 2023-10-18 09:53:44,535|INFO|ledger.py|Recovered tree in 0.04782364144921303 seconds
node1_1 | 2023-10-18 09:53:44,610|INFO|ledger.py|Starting ledger...
node1_1 | 2023-10-18 09:53:44,629|INFO|ledger.py|Recovering tree from transaction log
node1_1 | 2023-10-18 09:53:44,669|INFO|ledger.py|Recovered tree in 0.03970630466938019 seconds
node1_1 | 2023-10-18 09:53:44,724|INFO|ledger.py|Starting ledger...
node1_1 | 2023-10-18 09:53:44,742|INFO|ledger.py|Recovering tree from transaction log
node1_1 | 2023-10-18 09:53:44,778|INFO|ledger.py|Recovered tree in 0.03600406274199486 seconds
node1_1 | 2023-10-18 09:53:44,902|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node1
node1_1 | 2023-10-18 09:53:44,903|INFO|pool_manager.py|Node1 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node1_1 | 2023-10-18 09:53:44,904|INFO|pool_manager.py|Node1 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node1_1 | 2023-10-18 09:53:44,905|INFO|pool_manager.py|Node1 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node1_1 | 2023-10-18 09:53:44,905|INFO|pool_manager.py|Node1 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node1_1 | 2023-10-18 09:53:44,919|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2023-10-18 09:53:44,924|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2023-10-18 09:53:44,928|INFO|stacks.py|Node1C: clients connections tracking is enabled.
node1_1 | 2023-10-18 09:53:44,928|INFO|stacks.py|Node1C: client stack restart is enabled.
node1_1 | 2023-10-18 09:53:44,985|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node1_1 | 2023-10-18 09:53:44,985|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin.stats_consumer.stats_publisher.StatsPublisher'>] because it does not have a 'pluginType' attribute
node1_1 | 2023-10-18 09:53:44,985|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <enum 'Topic'>] because it does not have a 'pluginType' attribute
node1_1 | 2023-10-18 09:53:44,985|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin_loader.HasDynamicallyImportedModules'>] because it does not have a 'pluginType' attribute
node1_1 | 2023-10-18 09:53:44,985|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.stats_consumer.StatsConsumer'>] because it does not have a 'pluginType' attribute
node1_1 | 2023-10-18 09:53:44,985|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node1_1 | 2023-10-18 09:53:44,986|INFO|node.py|Node1 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node3', 'Node1', 'Node4', 'Node2'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node1_1 | 2023-10-18 09:53:44,987|INFO|consensus_shared_data.py|Node1:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node1_1 | 2023-10-18 09:53:44,992|INFO|primary_connection_monitor_service.py|Node1:0 scheduling primary connection check in 60 sec
node1_1 | 2023-10-18 09:53:44,992|NOTIFICATION|replicas.py|Node1 added replica Node1:0 to instance 0 (master)
node1_1 | 2023-10-18 09:53:44,993|INFO|replicas.py|reset monitor due to replica addition
node1_1 | 2023-10-18 09:53:44,993|INFO|consensus_shared_data.py|Node1:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node1_1 | 2023-10-18 09:53:44,995|NOTIFICATION|replicas.py|Node1 added replica Node1:1 to instance 1 (backup)
node1_1 | 2023-10-18 09:53:44,995|INFO|replicas.py|reset monitor due to replica addition
node1_1 | 2023-10-18 09:53:44,996|INFO|consensus_shared_data.py|Node1:0 updated validators list to {'Node3', 'Node1', 'Node4', 'Node2'}
node1_1 | 2023-10-18 09:53:44,996|INFO|consensus_shared_data.py|Node1:1 updated validators list to {'Node3', 'Node1', 'Node4', 'Node2'}
node1_1 | 2023-10-18 09:53:44,996|INFO|replica.py|Node1:0 setting primaryName for view no 0 to: Node1:0
node1_1 | 2023-10-18 09:53:44,996|NOTIFICATION|primary_connection_monitor_service.py|Node1:0 restored connection to primary of master
node1_1 | 2023-10-18 09:53:44,996|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node1_1 | 2023-10-18 09:53:44,996|INFO|replica.py|Node1:1 setting primaryName for view no 0 to: Node2:1
node1_1 | 2023-10-18 09:53:44,997|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node1_1 | 2023-10-18 09:53:44,997|INFO|node.py|total plugins loaded in node: 0
node1_1 | 2023-10-18 09:53:45,007|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7ff2acf0cf98> found state to be empty, recreating from ledger 0
node1_1 | 2023-10-18 09:53:45,011|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7ff2acf0cf98> initialized state for ledger 0: state root G3oQPwPKGpWEgddmVAg3o5iQqy6cocBVBRNSb73RF8C4
node1_1 | 2023-10-18 09:53:45,011|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7ff2acf0cf98> found state to be empty, recreating from ledger 2
node1_1 | 2023-10-18 09:53:45,012|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7ff2acf0cf98> initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node1_1 | 2023-10-18 09:53:45,012|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7ff2acf0cf98> found state to be empty, recreating from ledger 1
node1_1 | 2023-10-18 09:53:45,016|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7ff2acf0cf98> initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node1_1 | 2023-10-18 09:53:45,016|INFO|motor.py|Node1 changing status from stopped to starting
node1_1 | 2023-10-18 09:53:45,018|INFO|zstack.py|Node1 will bind its listener at 0.0.0.0:9701
node1_1 | 2023-10-18 09:53:45,018|INFO|stacks.py|CONNECTION: Node1 listening for other nodes at 0.0.0.0:9701
node1_1 | 2023-10-18 09:53:45,019|INFO|zstack.py|Node1C will bind its listener at 0.0.0.0:9702
node1_1 | 2023-10-18 09:53:45,052|INFO|node.py|Node1 first time running...
node2_1 | Ledger does not exist - Creating...
node2_1 | von_generate_transactions -i 3.110.18.225 -n 2
node2_1 |
node2_1 |
node2_1 | ================================================================================================
node2_1 | Generating genesis transaction file:
node2_1 | nodeArg: --nodeNum 2
node2_1 | ipAddresses: 3.110.18.225,3.110.18.225,3.110.18.225,3.110.18.225
node2_1 | genesisFilePath: /home/indy/ledger/sandbox/pool_transactions_genesis
node2_1 | ------------------------------------------------------------------------------------------------
node2_1 | generate_indy_pool_transactions --nodes 4 --clients 0 --nodeNum 2 --ips 3.110.18.225,3.110.18.225,3.110.18.225,3.110.18.225
node2_1 |
node2_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node2_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node2_1 | Generating keys for provided seed b'000000000000000000000000000Node2'
node2_1 | Init local keys for client-stack
node2_1 | Public key is Fsp2dyt7D2B4GA53hKnEmLym5Y75ExGFz2ZBzcQMNKsB
node2_1 | Verification key is 8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb
node2_1 | Init local keys for node-stack
node2_1 | Public key is Fsp2dyt7D2B4GA53hKnEmLym5Y75ExGFz2ZBzcQMNKsB
node2_1 | Verification key is 8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb
node2_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node2_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node2_1 | Nodes will not run locally, so writing /etc/indy/indy.env
node2_1 | This node with name Node2 will use ports 9703 and 9704 for nodestack and clientstack respectively
node2_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node2_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node2_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node2_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node2_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node2_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node2_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node2_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node2_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node2_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node2_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node2_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node2_1 |
node2_1 | ------------------------------------------------------------------------------------------------
node2_1 | Generated genesis transaction file; /home/indy/ledger/sandbox/pool_transactions_genesis
node2_1 |
node2_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","blskey_pop":"RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1","client_ip":"3.110.18.225","client_port":9702,"node_ip":"3.110.18.225","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv"},"metadata":{"from":"Th7MpTaRZVRYnPiabds81Y"},"type":"0"},"txnMetadata":{"seqNo":1,"txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62"},"ver":"1"}
node2_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","blskey_pop":"Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5","client_ip":"3.110.18.225","client_port":9704,"node_ip":"3.110.18.225","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb"},"metadata":{"from":"EbP4aYNeTHL6q385GuVpRV"},"type":"0"},"txnMetadata":{"seqNo":2,"txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc"},"ver":"1"}
node2_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","blskey_pop":"QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh","client_ip":"3.110.18.225","client_port":9706,"node_ip":"3.110.18.225","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya"},"metadata":{"from":"4cU41vWW82ArfxJxHkzXPG"},"type":"0"},"txnMetadata":{"seqNo":3,"txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4"},"ver":"1"}
node1_1 | 2023-10-18 09:53:45,052|INFO|node.py|Node1 processed 0 Ordered batches for instance 0 before starting catch up
node1_1 | 2023-10-18 09:53:45,053|INFO|node.py|Node1 processed 0 Ordered batches for instance 1 before starting catch up
node1_1 | 2023-10-18 09:53:45,053|INFO|ordering_service.py|Node1:0 reverted 0 batches before starting catch up
node1_1 | 2023-10-18 09:53:45,053|INFO|node_leecher_service.py|Node1:NodeLeecherService starting catchup (is_initial=True)
node1_1 | 2023-10-18 09:53:45,053|INFO|node_leecher_service.py|Node1:NodeLeecherService transitioning from Idle to PreSyncingPool
node1_1 | 2023-10-18 09:53:45,053|INFO|cons_proof_service.py|Node1:ConsProofService:0 starts
node1_1 | 2023-10-18 09:53:45,056|INFO|kit_zstack.py|CONNECTION: Node1 found the following missing connections: Node3, Node2, Node4
node1_1 | 2023-10-18 09:53:45,057|INFO|zstack.py|CONNECTION: Node1 looking for Node3 at 3.110.18.225:9705
node1_1 | 2023-10-18 09:53:45,060|INFO|zstack.py|CONNECTION: Node1 looking for Node2 at 3.110.18.225:9703
node1_1 | 2023-10-18 09:53:45,061|INFO|zstack.py|CONNECTION: Node1 looking for Node4 at 3.110.18.225:9707
node1_1 | 2023-10-18 09:54:00,061|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 09:54:15,069|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 09:54:30,079|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 09:54:45,089|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 09:55:00,100|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 09:55:15,106|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 09:55:30,109|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 09:55:45,114|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 09:56:00,122|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 09:56:15,126|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 09:56:30,135|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 09:56:45,145|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 09:57:00,156|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | Ledger does not exist - Creating...
node4_1 | von_generate_transactions -i 3.110.18.225 -n 4
node4_1 |
node4_1 |
node4_1 | ================================================================================================
node4_1 | Generating genesis transaction file:
node4_1 | nodeArg: --nodeNum 4
node4_1 | ipAddresses: 3.110.18.225,3.110.18.225,3.110.18.225,3.110.18.225
node4_1 | genesisFilePath: /home/indy/ledger/sandbox/pool_transactions_genesis
node4_1 | ------------------------------------------------------------------------------------------------
node4_1 | generate_indy_pool_transactions --nodes 4 --clients 0 --nodeNum 4 --ips 3.110.18.225,3.110.18.225,3.110.18.225,3.110.18.225
node4_1 |
node4_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node4_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node4_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node4_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node4_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node1_1 | 2023-10-18 10:05:15,423|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 10:05:30,433|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 10:05:45,441|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 10:06:00,451|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 10:06:15,458|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 10:06:30,460|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 10:06:45,466|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 10:07:00,479|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 10:07:15,481|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2023-10-18 10:07:30,487|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","blskey_pop":"RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP","client_ip":"3.110.18.225","client_port":9708,"node_ip":"3.110.18.225","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA"},"metadata":{"from":"TWwCRQRZ2ZHMJFn9TzLp7W"},"type":"0"},"txnMetadata":{"seqNo":4,"txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008"},"ver":"1"}
node2_1 | ================================================================================================
node2_1 |
node2_1 | start_indy_node Node2 0.0.0.0 9703 0.0.0.0 9704
node2_1 | 2023-10-18 09:53:40,752|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node2_1 | 2023-10-18 09:53:40,838|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node2_1 | 2023-10-18 09:53:40,893|INFO|looper.py|Starting up indy-node
node2_1 | 2023-10-18 09:53:40,941|INFO|ledger.py|Starting ledger...
node2_1 | 2023-10-18 09:53:40,960|INFO|ledger.py|Recovering tree from transaction log
node2_1 | 2023-10-18 09:53:41,006|INFO|ledger.py|Recovered tree in 0.046080200001597404 seconds
node2_1 | 2023-10-18 09:53:41,065|INFO|ledger.py|Starting ledger...
node2_1 | 2023-10-18 09:53:41,087|INFO|ledger.py|Recovering tree from transaction log
node2_1 | 2023-10-18 09:53:41,129|INFO|ledger.py|Recovered tree in 0.042096998542547226 seconds
node2_1 | 2023-10-18 09:53:41,195|INFO|ledger.py|Starting ledger...
node2_1 | 2023-10-18 09:53:41,215|INFO|ledger.py|Recovering tree from transaction log
node2_1 | 2023-10-18 09:53:41,253|INFO|ledger.py|Recovered tree in 0.03811372071504593 seconds
node2_1 | 2023-10-18 09:53:41,327|INFO|ledger.py|Starting ledger...
node2_1 | 2023-10-18 09:53:41,347|INFO|ledger.py|Recovering tree from transaction log
node2_1 | 2023-10-18 09:53:41,395|INFO|ledger.py|Recovered tree in 0.048358749598264694 seconds
node2_1 | 2023-10-18 09:53:41,577|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node2
node2_1 | 2023-10-18 09:53:41,577|INFO|pool_manager.py|Node2 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node2_1 | 2023-10-18 09:53:41,578|INFO|pool_manager.py|Node2 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node2_1 | 2023-10-18 09:53:41,578|INFO|pool_manager.py|Node2 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node2_1 | 2023-10-18 09:53:41,578|INFO|pool_manager.py|Node2 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node2_1 | 2023-10-18 09:53:41,583|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node2_1 | 2023-10-18 09:53:41,585|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node2_1 | 2023-10-18 09:53:41,588|INFO|stacks.py|Node2C: clients connections tracking is enabled.
node2_1 | 2023-10-18 09:53:41,588|INFO|stacks.py|Node2C: client stack restart is enabled.
node2_1 | 2023-10-18 09:53:41,639|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node2_1 | 2023-10-18 09:53:41,641|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin.stats_consumer.stats_publisher.StatsPublisher'>] because it does not have a 'pluginType' attribute
node2_1 | 2023-10-18 09:53:41,642|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <enum 'Topic'>] because it does not have a 'pluginType' attribute
node2_1 | 2023-10-18 09:53:41,642|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin_loader.HasDynamicallyImportedModules'>] because it does not have a 'pluginType' attribute
node2_1 | 2023-10-18 09:53:41,642|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.stats_consumer.StatsConsumer'>] because it does not have a 'pluginType' attribute
node2_1 | 2023-10-18 09:53:41,643|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node2_1 | 2023-10-18 09:53:41,646|INFO|node.py|Node2 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node3', 'Node2', 'Node1', 'Node4'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node2_1 | 2023-10-18 09:53:41,649|INFO|consensus_shared_data.py|Node2:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node2_1 | 2023-10-18 09:53:41,654|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2023-10-18 09:53:41,656|NOTIFICATION|replicas.py|Node2 added replica Node2:0 to instance 0 (master)
node2_1 | 2023-10-18 09:53:41,657|INFO|replicas.py|reset monitor due to replica addition
node2_1 | 2023-10-18 09:53:41,658|INFO|consensus_shared_data.py|Node2:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node2_1 | 2023-10-18 09:53:41,660|NOTIFICATION|replicas.py|Node2 added replica Node2:1 to instance 1 (backup)
node2_1 | 2023-10-18 09:53:41,661|INFO|replicas.py|reset monitor due to replica addition
node2_1 | 2023-10-18 09:53:41,661|INFO|consensus_shared_data.py|Node2:0 updated validators list to {'Node3', 'Node2', 'Node1', 'Node4'}
node2_1 | 2023-10-18 09:53:41,661|INFO|consensus_shared_data.py|Node2:1 updated validators list to {'Node3', 'Node2', 'Node1', 'Node4'}
node2_1 | 2023-10-18 09:53:41,662|INFO|replica.py|Node2:0 setting primaryName for view no 0 to: Node1:0
node2_1 | 2023-10-18 09:53:41,662|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 lost connection to primary of master
node2_1 | 2023-10-18 09:53:41,662|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2023-10-18 09:53:41,663|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node2_1 | 2023-10-18 09:53:41,663|INFO|replica.py|Node2:1 setting primaryName for view no 0 to: Node2:1
node2_1 | 2023-10-18 09:53:41,664|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node2_1 | 2023-10-18 09:53:41,664|INFO|node.py|total plugins loaded in node: 0
node2_1 | 2023-10-18 09:53:41,666|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f1391243fd0> found state to be empty, recreating from ledger 0
node2_1 | 2023-10-18 09:53:41,671|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f1391243fd0> initialized state for ledger 0: state root G3oQPwPKGpWEgddmVAg3o5iQqy6cocBVBRNSb73RF8C4
node2_1 | 2023-10-18 09:53:41,671|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f1391243fd0> found state to be empty, recreating from ledger 2
node2_1 | 2023-10-18 09:53:41,672|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f1391243fd0> initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node2_1 | 2023-10-18 09:53:41,673|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f1391243fd0> found state to be empty, recreating from ledger 1
node2_1 | 2023-10-18 09:53:41,681|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f1391243fd0> initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node2_1 | 2023-10-18 09:53:41,681|INFO|motor.py|Node2 changing status from stopped to starting
node2_1 | 2023-10-18 09:53:41,684|INFO|zstack.py|Node2 will bind its listener at 0.0.0.0:9703
node2_1 | 2023-10-18 09:53:41,684|INFO|stacks.py|CONNECTION: Node2 listening for other nodes at 0.0.0.0:9703
node4_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node4_1 | Generating keys for provided seed b'000000000000000000000000000Node4'
node4_1 | Init local keys for client-stack
node4_1 | Public key is ECUd5UfoYa2yUBkmxNkMbkfGKcZ8Voh5Mi3JzBwWEDpm
node4_1 | Verification key is 4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA
node4_1 | Init local keys for node-stack
node4_1 | Public key is ECUd5UfoYa2yUBkmxNkMbkfGKcZ8Voh5Mi3JzBwWEDpm
node4_1 | Verification key is 4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA
node4_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node4_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node4_1 | Nodes will not run locally, so writing /etc/indy/indy.env
node4_1 | This node with name Node4 will use ports 9707 and 9708 for nodestack and clientstack respectively
node4_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node4_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node4_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node4_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node4_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node4_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node4_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node4_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node4_1 |
node4_1 | ------------------------------------------------------------------------------------------------
node4_1 | Generated genesis transaction file; /home/indy/ledger/sandbox/pool_transactions_genesis
node4_1 |
node4_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","blskey_pop":"RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1","client_ip":"3.110.18.225","client_port":9702,"node_ip":"3.110.18.225","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv"},"metadata":{"from":"Th7MpTaRZVRYnPiabds81Y"},"type":"0"},"txnMetadata":{"seqNo":1,"txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62"},"ver":"1"}
node4_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","blskey_pop":"Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5","client_ip":"3.110.18.225","client_port":9704,"node_ip":"3.110.18.225","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb"},"metadata":{"from":"EbP4aYNeTHL6q385GuVpRV"},"type":"0"},"txnMetadata":{"seqNo":2,"txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc"},"ver":"1"}
node4_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","blskey_pop":"QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh","client_ip":"3.110.18.225","client_port":9706,"node_ip":"3.110.18.225","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya"},"metadata":{"from":"4cU41vWW82ArfxJxHkzXPG"},"type":"0"},"txnMetadata":{"seqNo":3,"txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4"},"ver":"1"}
node1_1 | 2023-10-18 10:07:45,494|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | ... // same "asking for ledger status of ledger 0" message repeats every 15 seconds
node1_1 | 2023-10-18 10:12:00,612|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","blskey_pop":"RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP","client_ip":"3.110.18.225","client_port":9708,"node_ip":"3.110.18.225","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA"},"metadata":{"from":"TWwCRQRZ2ZHMJFn9TzLp7W"},"type":"0"},"txnMetadata":{"seqNo":4,"txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008"},"ver":"1"}
node4_1 | ================================================================================================
node4_1 |
node4_1 | start_indy_node Node4 0.0.0.0 9707 0.0.0.0 9708
node4_1 | 2023-10-18 09:53:43,474|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2023-10-18 09:53:43,574|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2023-10-18 09:53:43,635|INFO|looper.py|Starting up indy-node
node4_1 | 2023-10-18 09:53:43,698|INFO|ledger.py|Starting ledger...
node4_1 | 2023-10-18 09:53:43,715|INFO|ledger.py|Recovering tree from transaction log
node4_1 | 2023-10-18 09:53:43,748|INFO|ledger.py|Recovered tree in 0.0329501386731863 seconds
node4_1 | 2023-10-18 09:53:43,798|INFO|ledger.py|Starting ledger...
node4_1 | 2023-10-18 09:53:43,816|INFO|ledger.py|Recovering tree from transaction log
node4_1 | 2023-10-18 09:53:43,850|INFO|ledger.py|Recovered tree in 0.03357454389333725 seconds
node4_1 | 2023-10-18 09:53:43,916|INFO|ledger.py|Starting ledger...
node4_1 | 2023-10-18 09:53:43,965|INFO|ledger.py|Recovering tree from transaction log
node4_1 | 2023-10-18 09:53:43,998|INFO|ledger.py|Recovered tree in 0.03333096578717232 seconds
node4_1 | 2023-10-18 09:53:44,052|INFO|ledger.py|Starting ledger...
node4_1 | 2023-10-18 09:53:44,068|INFO|ledger.py|Recovering tree from transaction log
node4_1 | 2023-10-18 09:53:44,105|INFO|ledger.py|Recovered tree in 0.03686473146080971 seconds
node4_1 | 2023-10-18 09:53:44,251|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node4
node4_1 | 2023-10-18 09:53:44,252|INFO|pool_manager.py|Node4 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node4_1 | 2023-10-18 09:53:44,252|INFO|pool_manager.py|Node4 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node4_1 | 2023-10-18 09:53:44,256|INFO|pool_manager.py|Node4 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node4_1 | 2023-10-18 09:53:44,256|INFO|pool_manager.py|Node4 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node4_1 | 2023-10-18 09:53:44,266|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2023-10-18 09:53:44,270|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2023-10-18 09:53:44,284|INFO|stacks.py|Node4C: clients connections tracking is enabled.
node4_1 | 2023-10-18 09:53:44,284|INFO|stacks.py|Node4C: client stack restart is enabled.
node4_1 | 2023-10-18 09:53:44,347|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node4_1 | 2023-10-18 09:53:44,348|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin.stats_consumer.stats_publisher.StatsPublisher'>] because it does not have a 'pluginType' attribute
node4_1 | 2023-10-18 09:53:44,348|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <enum 'Topic'>] because it does not have a 'pluginType' attribute
node4_1 | 2023-10-18 09:53:44,348|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin_loader.HasDynamicallyImportedModules'>] because it does not have a 'pluginType' attribute
node4_1 | 2023-10-18 09:53:44,348|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.stats_consumer.StatsConsumer'>] because it does not have a 'pluginType' attribute
node4_1 | 2023-10-18 09:53:44,348|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node4_1 | 2023-10-18 09:53:44,349|INFO|node.py|Node4 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node2', 'Node4', 'Node1', 'Node3'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node4_1 | 2023-10-18 09:53:44,350|INFO|consensus_shared_data.py|Node4:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2023-10-18 09:53:44,353|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2023-10-18 09:53:44,353|NOTIFICATION|replicas.py|Node4 added replica Node4:0 to instance 0 (master)
node4_1 | 2023-10-18 09:53:44,353|INFO|replicas.py|reset monitor due to replica addition
node4_1 | 2023-10-18 09:53:44,354|INFO|consensus_shared_data.py|Node4:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2023-10-18 09:53:44,356|NOTIFICATION|replicas.py|Node4 added replica Node4:1 to instance 1 (backup)
node4_1 | 2023-10-18 09:53:44,357|INFO|replicas.py|reset monitor due to replica addition
node4_1 | 2023-10-18 09:53:44,357|INFO|consensus_shared_data.py|Node4:0 updated validators list to {'Node2', 'Node4', 'Node1', 'Node3'}
node4_1 | 2023-10-18 09:53:44,358|INFO|consensus_shared_data.py|Node4:1 updated validators list to {'Node2', 'Node4', 'Node1', 'Node3'}
node4_1 | 2023-10-18 09:53:44,358|INFO|replica.py|Node4:0 setting primaryName for view no 0 to: Node1:0
node4_1 | 2023-10-18 09:53:44,358|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 lost connection to primary of master
node4_1 | 2023-10-18 09:53:44,358|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2023-10-18 09:53:44,360|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node4_1 | 2023-10-18 09:53:44,360|INFO|replica.py|Node4:1 setting primaryName for view no 0 to: Node2:1
node4_1 | 2023-10-18 09:53:44,360|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node4_1 | 2023-10-18 09:53:44,361|INFO|node.py|total plugins loaded in node: 0
node4_1 | 2023-10-18 09:53:44,364|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f39e9eabf98> found state to be empty, recreating from ledger 0
node4_1 | 2023-10-18 09:53:44,375|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f39e9eabf98> initialized state for ledger 0: state root G3oQPwPKGpWEgddmVAg3o5iQqy6cocBVBRNSb73RF8C4
node4_1 | 2023-10-18 09:53:44,375|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f39e9eabf98> found state to be empty, recreating from ledger 2
node4_1 | 2023-10-18 09:53:44,376|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f39e9eabf98> initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node4_1 | 2023-10-18 09:53:44,376|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f39e9eabf98> found state to be empty, recreating from ledger 1
node4_1 | 2023-10-18 09:53:44,381|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f39e9eabf98> initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node4_1 | 2023-10-18 09:53:44,381|INFO|motor.py|Node4 changing status from stopped to starting
node4_1 | 2023-10-18 09:53:44,384|INFO|zstack.py|Node4 will bind its listener at 0.0.0.0:9707
node1_1 | 2023-10-18 10:12:15,620|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | ... // repeats every 15 seconds
node1_1 | 2023-10-18 10:13:30,660|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2023-10-18 09:53:41,686|INFO|zstack.py|Node2C will bind its listener at 0.0.0.0:9704
node2_1 | 2023-10-18 09:53:41,768|INFO|node.py|Node2 first time running...
node2_1 | 2023-10-18 09:53:41,769|INFO|node.py|Node2 processed 0 Ordered batches for instance 0 before starting catch up
node2_1 | 2023-10-18 09:53:41,769|INFO|node.py|Node2 processed 0 Ordered batches for instance 1 before starting catch up
node2_1 | 2023-10-18 09:53:41,770|INFO|ordering_service.py|Node2:0 reverted 0 batches before starting catch up
node2_1 | 2023-10-18 09:53:41,770|INFO|node_leecher_service.py|Node2:NodeLeecherService starting catchup (is_initial=True)
node2_1 | 2023-10-18 09:53:41,770|INFO|node_leecher_service.py|Node2:NodeLeecherService transitioning from Idle to PreSyncingPool
node2_1 | 2023-10-18 09:53:41,771|INFO|cons_proof_service.py|Node2:ConsProofService:0 starts
node2_1 | 2023-10-18 09:53:41,774|INFO|kit_zstack.py|CONNECTION: Node2 found the following missing connections: Node3, Node1, Node4
node2_1 | 2023-10-18 09:53:41,776|INFO|zstack.py|CONNECTION: Node2 looking for Node3 at 3.110.18.225:9705
node2_1 | 2023-10-18 09:53:41,780|INFO|zstack.py|CONNECTION: Node2 looking for Node1 at 3.110.18.225:9701
node2_1 | 2023-10-18 09:53:41,783|INFO|zstack.py|CONNECTION: Node2 looking for Node4 at 3.110.18.225:9707
node2_1 | 2023-10-18 09:53:56,777|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2023-10-18 09:54:11,786|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2023-10-18 09:54:26,797|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2023-10-18 09:54:41,661|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2023-10-18 09:54:41,661|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2023-10-18 09:54:41,662|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2023-10-18 09:54:41,675|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2023-10-18 09:54:41,676|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2023-10-18 09:54:41,676|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node3_1 | Ledger does not exist - Creating...
node3_1 | von_generate_transactions -i 3.110.18.225 -n 3
node3_1 |
node3_1 |
node3_1 | ================================================================================================
node3_1 | Generating genesis transaction file:
node3_1 | nodeArg: --nodeNum 3
node3_1 | ipAddresses: 3.110.18.225,3.110.18.225,3.110.18.225,3.110.18.225
node3_1 | genesisFilePath: /home/indy/ledger/sandbox/pool_transactions_genesis
node3_1 | ------------------------------------------------------------------------------------------------
node3_1 | generate_indy_pool_transactions --nodes 4 --clients 0 --nodeNum 3 --ips 3.110.18.225,3.110.18.225,3.110.18.225,3.110.18.225
node3_1 |
node3_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node3_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node3_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node3_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node3_1 | Generating keys for provided seed b'000000000000000000000000000Node3'
node3_1 | Init local keys for client-stack
node3_1 | Public key is 6KTs7Q9Lng5uX6oWCkVifiJ6hSpkdHiRijAsXtAunnGN
node3_1 | Verification key is DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya
node3_1 | Init local keys for node-stack
node3_1 | Public key is 6KTs7Q9Lng5uX6oWCkVifiJ6hSpkdHiRijAsXtAunnGN
node3_1 | Verification key is DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya
node3_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node3_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node3_1 | Nodes will not run locally, so writing /etc/indy/indy.env
node3_1 | This node with name Node3 will use ports 9705 and 9706 for nodestack and clientstack respectively
node3_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node3_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node3_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node3_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node3_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node3_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node3_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node3_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node3_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node3_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node3_1 |
node3_1 | ------------------------------------------------------------------------------------------------
node3_1 | Generated genesis transaction file; /home/indy/ledger/sandbox/pool_transactions_genesis
node3_1 |
node3_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","blskey_pop":"RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1","client_ip":"3.110.18.225","client_port":9702,"node_ip":"3.110.18.225","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv"},"metadata":{"from":"Th7MpTaRZVRYnPiabds81Y"},"type":"0"},"txnMetadata":{"seqNo":1,"txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62"},"ver":"1"}
node3_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","blskey_pop":"Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5","client_ip":"3.110.18.225","client_port":9704,"node_ip":"3.110.18.225","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb"},"metadata":{"from":"EbP4aYNeTHL6q385GuVpRV"},"type":"0"},"txnMetadata":{"seqNo":2,"txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc"},"ver":"1"}
node3_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","blskey_pop":"QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh","client_ip":"3.110.18.225","client_port":9706,"node_ip":"3.110.18.225","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya"},"metadata":{"from":"4cU41vWW82ArfxJxHkzXPG"},"type":"0"},"txnMetadata":{"seqNo":3,"txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4"},"ver":"1"}
node3_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","blskey_pop":"RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP","client_ip":"3.110.18.225","client_port":9708,"node_ip":"3.110.18.225","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA"},"metadata":{"from":"TWwCRQRZ2ZHMJFn9TzLp7W"},"type":"0"},"txnMetadata":{"seqNo":4,"txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008"},"ver":"1"}
node3_1 | ================================================================================================
node3_1 |
node3_1 | start_indy_node Node3 0.0.0.0 9705 0.0.0.0 9706
node3_1 | 2023-10-18 09:53:41,084|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node3_1 | 2023-10-18 09:53:41,209|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node3_1 | 2023-10-18 09:53:41,286|INFO|looper.py|Starting up indy-node
node3_1 | 2023-10-18 09:53:41,357|INFO|ledger.py|Starting ledger...
node3_1 | 2023-10-18 09:53:41,383|INFO|ledger.py|Recovering tree from transaction log
node3_1 | 2023-10-18 09:53:41,432|INFO|ledger.py|Recovered tree in 0.049278443679213524 seconds
node3_1 | 2023-10-18 09:53:41,503|INFO|ledger.py|Starting ledger...
node3_1 | 2023-10-18 09:53:41,528|INFO|ledger.py|Recovering tree from transaction log
node3_1 | 2023-10-18 09:53:41,595|INFO|ledger.py|Recovered tree in 0.0669380221515894 seconds
node3_1 | 2023-10-18 09:53:41,677|INFO|ledger.py|Starting ledger...
node3_1 | 2023-10-18 09:53:41,697|INFO|ledger.py|Recovering tree from transaction log
node3_1 | 2023-10-18 09:53:41,743|INFO|ledger.py|Recovered tree in 0.045882727950811386 seconds
node3_1 | 2023-10-18 09:53:41,850|INFO|ledger.py|Starting ledger...
node3_1 | 2023-10-18 09:53:41,865|INFO|ledger.py|Recovering tree from transaction log
node3_1 | 2023-10-18 09:53:41,905|INFO|ledger.py|Recovered tree in 0.03967468813061714 seconds
node3_1 | 2023-10-18 09:53:42,031|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node3
node3_1 | 2023-10-18 09:53:42,032|INFO|pool_manager.py|Node3 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node3_1 | 2023-10-18 09:53:42,032|INFO|pool_manager.py|Node3 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node3_1 | 2023-10-18 09:53:42,033|INFO|pool_manager.py|Node3 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node3_1 | 2023-10-18 09:53:42,033|INFO|pool_manager.py|Node3 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node3_1 | 2023-10-18 09:53:42,042|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node3_1 | 2023-10-18 09:53:42,044|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node3_1 | 2023-10-18 09:53:42,049|INFO|stacks.py|Node3C: clients connections tracking is enabled.
node3_1 | 2023-10-18 09:53:42,049|INFO|stacks.py|Node3C: client stack restart is enabled.
node3_1 | 2023-10-18 09:53:42,138|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node3_1 | 2023-10-18 09:53:42,138|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin.stats_consumer.stats_publisher.StatsPublisher'>] because it does not have a 'pluginType' attribute
node3_1 | 2023-10-18 09:53:42,139|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <enum 'Topic'>] because it does not have a 'pluginType' attribute
node3_1 | 2023-10-18 09:53:42,139|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin_loader.HasDynamicallyImportedModules'>] because it does not have a 'pluginType' attribute
node3_1 | 2023-10-18 09:53:42,139|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.stats_consumer.StatsConsumer'>] because it does not have a 'pluginType' attribute
node3_1 | 2023-10-18 09:53:42,139|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node3_1 | 2023-10-18 09:53:42,140|INFO|node.py|Node3 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node4', 'Node2', 'Node1', 'Node3'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node3_1 | 2023-10-18 09:53:42,141|INFO|consensus_shared_data.py|Node3:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node3_1 | 2023-10-18 09:53:42,144|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node3_1 | 2023-10-18 09:53:42,145|NOTIFICATION|replicas.py|Node3 added replica Node3:0 to instance 0 (master)
node4_1 | 2023-10-18 10:04:44,436|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 primary has been disconnected for too long
node4_1 | 2023-10-18 10:04:44,436|INFO|primary_connection_monitor_service.py|Node4:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node4_1 | 2023-10-18 10:04:44,436|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2023-10-18 10:04:44,763|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2023-10-18 10:04:59,770|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2023-10-18 10:05:14,781|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2023-10-18 10:05:29,790|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2023-10-18 10:05:44,436|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 primary has been disconnected for too long
node4_1 | 2023-10-18 10:05:44,436|INFO|primary_connection_monitor_service.py|Node4:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node4_1 | 2023-10-18 10:05:44,436|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2023-10-18 10:05:44,448|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 primary has been disconnected for too long
node4_1 | 2023-10-18 10:05:44,451|INFO|primary_connection_monitor_service.py|Node4:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node4_1 | 2023-10-18 10:05:44,452|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2023-10-18 10:05:44,798|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2023-10-18 10:05:59,799|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2023-10-18 10:06:14,807|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2023-10-18 10:06:29,810|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2023-10-18 10:06:44,444|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 primary has been disconnected for too long
webserver_1 | Ledger does not exist - Creating genesis data...
node2_1 | 2023-10-18 10:04:41,747|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node3_1 | 2023-10-18 09:53:42,146|INFO|consensus_shared_data.py|Node3:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2023-10-18 10:12:44,503|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
webserver_1 | von_generate_transactions -i 3.110.18.225
node2_1 | 2023-10-18 10:04:41,747|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node3_1 | 2023-10-18 09:53:42,149|NOTIFICATION|replicas.py|Node3 added replica Node3:1 to instance 1 (backup)
node4_1 | 2023-10-18 10:12:45,012|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
webserver_1 |
node2_1 | 2023-10-18 10:04:42,058|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:42,149|INFO|replicas.py|reset monitor due to replica addition
node4_1 | 2023-10-18 10:13:00,013|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
webserver_1 |
node2_1 | 2023-10-18 10:04:57,071|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:42,149|INFO|consensus_shared_data.py|Node3:0 updated validators list to {'Node4', 'Node2', 'Node1', 'Node3'}
node4_1 | 2023-10-18 10:13:15,020|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
webserver_1 | ================================================================================================
node2_1 | 2023-10-18 10:05:12,072|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:42,150|INFO|consensus_shared_data.py|Node3:1 updated validators list to {'Node4', 'Node2', 'Node1', 'Node3'}
node4_1 | 2023-10-18 10:13:30,031|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2023-10-18 10:05:27,080|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:42,150|INFO|replica.py|Node3:0 setting primaryName for view no 0 to: Node1:0
webserver_1 | Generating genesis transaction file:
webserver_1 | nodeArg:
webserver_1 | ipAddresses: 3.110.18.225,3.110.18.225,3.110.18.225,3.110.18.225
webserver_1 | genesisFilePath: /home/indy/ledger/sandbox/pool_transactions_genesis
webserver_1 | ------------------------------------------------------------------------------------------------
webserver_1 | generate_indy_pool_transactions --nodes 4 --clients 0 --ips 3.110.18.225,3.110.18.225,3.110.18.225,3.110.18.225
webserver_1 |
webserver_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
webserver_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
webserver_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
webserver_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node3_1 | 2023-10-18 09:53:42,150|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 lost connection to primary of master
webserver_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node2_1 | 2023-10-18 10:05:41,742|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node3_1 | 2023-10-18 09:53:42,150|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
webserver_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node2_1 | 2023-10-18 10:05:41,743|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node3_1 | 2023-10-18 09:53:42,150|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
webserver_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node2_1 | 2023-10-18 10:05:41,743|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node3_1 | 2023-10-18 09:53:42,150|INFO|replica.py|Node3:1 setting primaryName for view no 0 to: Node2:1
webserver_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node2_1 | 2023-10-18 10:05:41,754|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node3_1 | 2023-10-18 09:53:42,151|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
webserver_1 |
node2_1 | 2023-10-18 10:05:41,755|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2023-10-18 10:05:41,755|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
webserver_1 | ------------------------------------------------------------------------------------------------
node2_1 | 2023-10-18 10:05:42,081|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:42,151|INFO|node.py|total plugins loaded in node: 0
webserver_1 | Generated genesis transaction file; /home/indy/ledger/sandbox/pool_transactions_genesis
webserver_1 |
node3_1 | 2023-10-18 09:53:42,154|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f41c7f0af98> found state to be empty, recreating from ledger 0
webserver_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","blskey_pop":"RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1","client_ip":"3.110.18.225","client_port":9702,"node_ip":"3.110.18.225","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv"},"metadata":{"from":"Th7MpTaRZVRYnPiabds81Y"},"type":"0"},"txnMetadata":{"seqNo":1,"txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62"},"ver":"1"}
node2_1 | 2023-10-18 10:05:57,088|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:42,159|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f41c7f0af98> initialized state for ledger 0: state root G3oQPwPKGpWEgddmVAg3o5iQqy6cocBVBRNSb73RF8C4
webserver_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","blskey_pop":"Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5","client_ip":"3.110.18.225","client_port":9704,"node_ip":"3.110.18.225","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb"},"metadata":{"from":"EbP4aYNeTHL6q385GuVpRV"},"type":"0"},"txnMetadata":{"seqNo":2,"txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc"},"ver":"1"}
node2_1 | 2023-10-18 10:06:12,098|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:42,159|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f41c7f0af98> found state to be empty, recreating from ledger 2
webserver_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","blskey_pop":"QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh","client_ip":"3.110.18.225","client_port":9706,"node_ip":"3.110.18.225","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya"},"metadata":{"from":"4cU41vWW82ArfxJxHkzXPG"},"type":"0"},"txnMetadata":{"seqNo":3,"txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4"},"ver":"1"}
node2_1 | 2023-10-18 10:06:27,108|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:42,159|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f41c7f0af98> initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
webserver_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","blskey_pop":"RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP","client_ip":"3.110.18.225","client_port":9708,"node_ip":"3.110.18.225","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA"},"metadata":{"from":"TWwCRQRZ2ZHMJFn9TzLp7W"},"type":"0"},"txnMetadata":{"seqNo":4,"txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008"},"ver":"1"}
node2_1 | 2023-10-18 10:06:41,744|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node3_1 | 2023-10-18 09:53:42,159|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f41c7f0af98> found state to be empty, recreating from ledger 1
webserver_1 | ================================================================================================
node2_1 | 2023-10-18 10:06:41,744|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node3_1 | 2023-10-18 09:53:42,165|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f41c7f0af98> initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
webserver_1 |
node2_1 | 2023-10-18 10:06:41,744|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node3_1 | 2023-10-18 09:53:42,165|INFO|motor.py|Node3 changing status from stopped to starting
webserver_1 | 2023-10-18 09:54:05,600|INFO|server.py|REGISTER_NEW_DIDS is set to True
node2_1 | 2023-10-18 10:06:41,756|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node3_1 | 2023-10-18 09:53:42,168|INFO|zstack.py|Node3 will bind its listener at 0.0.0.0:9705
webserver_1 | 2023-10-18 09:54:05,605|INFO|server.py|LEDGER_INSTANCE_NAME is set to "Indy Ledger"
node2_1 | 2023-10-18 10:06:41,757|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
webserver_1 | 2023-10-18 09:54:05,607|INFO|server.py|Web analytics are DISABLED
node2_1 | 2023-10-18 10:06:41,757|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node3_1 | 2023-10-18 09:53:42,169|INFO|stacks.py|CONNECTION: Node3 listening for other nodes at 0.0.0.0:9705
webserver_1 | 2023-10-18 09:54:05,610|INFO|server.py|Running webserver...
node2_1 | 2023-10-18 10:06:42,111|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:42,170|INFO|zstack.py|Node3C will bind its listener at 0.0.0.0:9706
webserver_1 | 2023-10-18 09:54:05,611|DEBUG|selector_events.py|Using selector: EpollSelector
node2_1 | 2023-10-18 10:06:57,115|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:42,218|INFO|node.py|Node3 first time running...
webserver_1 | 2023-10-18 09:54:05,612|INFO|server.py|Creating trust anchor...
node2_1 | 2023-10-18 10:07:12,117|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:42,219|INFO|node.py|Node3 processed 0 Ordered batches for instance 0 before starting catch up
webserver_1 | 2023-10-18 09:54:05,613|INFO|anchor.py|Ledger cache will be stored in :memory:
node2_1 | 2023-10-18 10:07:27,127|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:42,219|INFO|node.py|Node3 processed 0 Ordered batches for instance 1 before starting catch up
webserver_1 | 2023-10-18 09:54:05,616|DEBUG|core.py|executing functools.partial(<function connect.<locals>.connector at 0x7f30fa50ebf8>)
node2_1 | 2023-10-18 10:07:41,752|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node3_1 | 2023-10-18 09:53:42,219|INFO|ordering_service.py|Node3:0 reverted 0 batches before starting catch up
webserver_1 | 2023-10-18 09:54:05,618|DEBUG|core.py|returning <sqlite3.Connection object at 0x7f310035c490>
node2_1 | 2023-10-18 10:07:41,752|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node3_1 | 2023-10-18 09:53:42,219|INFO|node_leecher_service.py|Node3:NodeLeecherService starting catchup (is_initial=True)
webserver_1 | ======== Running on http://0.0.0.0:8000 ========
node2_1 | 2023-10-18 10:07:41,752|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node3_1 | 2023-10-18 09:53:42,220|INFO|node_leecher_service.py|Node3:NodeLeecherService transitioning from Idle to PreSyncingPool
webserver_1 | (Press CTRL+C to quit)
node2_1 | 2023-10-18 10:07:41,765|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node3_1 | 2023-10-18 09:53:42,220|INFO|cons_proof_service.py|Node3:ConsProofService:0 starts
webserver_1 | 2023-10-18 09:54:05,622|INFO|anchor.py|Initializing transaction database
node2_1 | 2023-10-18 10:07:41,765|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node3_1 | 2023-10-18 09:53:42,225|INFO|kit_zstack.py|CONNECTION: Node3 found the following missing connections: Node4, Node2, Node1
webserver_1 | 2023-10-18 09:54:05,622|DEBUG|core.py|executing functools.partial(<built-in method executescript of sqlite3.Connection object at 0x7f310035c490>, '\n CREATE TABLE existent (\n ledger integer PRIMARY KEY,\n seqno integer NOT NULL DEFAULT 0\n );\n CREATE TABLE latest (\n ledger integer PRIMARY KEY,\n seqno integer NOT NULL DEFAULT 0\n );\n CREATE TABLE transactions (\n ledger integer NOT NULL,\n seqno integer NOT NULL,\n txntype integer NOT NULL,\n termsid integer,\n txnid text,\n added timestamp,\n value text,\n PRIMARY KEY (ledger, seqno)\n );\n CREATE INDEX txn_id ON transactions (txnid);\n CREATE VIRTUAL TABLE terms USING\n fts3(txnid, sender, ident, alias, verkey, short_verkey, data);\n ')
node2_1 | 2023-10-18 10:07:41,766|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node3_1 | 2023-10-18 09:53:42,226|INFO|zstack.py|CONNECTION: Node3 looking for Node4 at 3.110.18.225:9707
webserver_1 | 2023-10-18 09:54:05,624|DEBUG|core.py|returning <sqlite3.Cursor object at 0x7f30fbd25dc0>
node2_1 | 2023-10-18 10:07:42,139|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:42,230|INFO|zstack.py|CONNECTION: Node3 looking for Node2 at 3.110.18.225:9703
node2_1 | 2023-10-18 10:07:57,144|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:42,231|INFO|zstack.py|CONNECTION: Node3 looking for Node1 at 3.110.18.225:9701
webserver_1 | 2023-10-18 09:54:05,628|DEBUG|core.py|executing functools.partial(<built-in method close of sqlite3.Cursor object at 0x7f30fbd25dc0>)
node2_1 | 2023-10-18 10:08:12,153|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:53:57,229|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
webserver_1 | 2023-10-18 09:54:05,629|DEBUG|core.py|returning None
node2_1 | 2023-10-18 10:08:27,165|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:54:12,238|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
webserver_1 | 2023-10-18 09:54:05,632|DEBUG|pool.py|set_protocol_version: >>> protocol_version: 2
node2_1 | 2023-10-18 10:08:41,759|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node3_1 | 2023-10-18 09:54:27,250|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
webserver_1 | 2023-10-18 09:54:05,647|DEBUG|pool.py|set_protocol_version: Creating callback
node2_1 | 2023-10-18 10:08:41,759|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2023-10-18 10:08:41,759|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
webserver_1 | 2023-10-18 09:54:05,652|DEBUG|pool.py|set_protocol_version: <<< res: None
webserver_1 | 2023-10-18 09:54:05,653|DEBUG|pool.py|list_pools: >>>
webserver_1 | 2023-10-18 09:54:05,653|DEBUG|pool.py|list_pools: Creating callback
webserver_1 | 2023-10-18 09:54:05,654|DEBUG|pool.py|list_pools: <<< res: []
webserver_1 | 2023-10-18 09:54:05,656|INFO|anchor.py|Genesis file already exists: /home/indy/ledger/sandbox/pool_transactions_genesis
webserver_1 | 2023-10-18 09:54:05,656|DEBUG|pool.py|create_pool_ledger_config: >>> config_name: 'nodepool', config: '{"genesis_txn": "/home/indy/ledger/sandbox/pool_transactions_genesis"}'
webserver_1 | 2023-10-18 09:54:05,656|DEBUG|pool.py|create_pool_ledger_config: Creating callback
webserver_1 | 2023-10-18 09:54:05,658|DEBUG|pool.py|create_pool_ledger_config: <<< res: None
webserver_1 | 2023-10-18 09:54:05,658|DEBUG|pool.py|open_pool_ledger: >>> config_name: 'nodepool', config: '{}'
webserver_1 | 2023-10-18 09:54:05,658|DEBUG|pool.py|open_pool_ledger: Creating callback
webserver_1 | 2023-10-18 09:55:05,678|WARNING|libindy.py|_indy_loop_callback: Function returned error
webserver_1 | 2023-10-18 09:55:05,679|ERROR|anchor.py|Initialization error:
webserver_1 | Traceback (most recent call last):
webserver_1 | File "/home/indy/server/anchor.py", line 221, in _open_pool
webserver_1 | self._pool = await pool.open_pool_ledger(pool_name, json.dumps(pool_cfg))
webserver_1 | File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/site-packages/indy/pool.py", line 88, in open_pool_ledger
webserver_1 | open_pool_ledger.cb)
node3_1 | 2023-10-18 09:54:42,145|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 primary has been disconnected for too long
webserver_1 | indy.error.PoolLedgerTimeout
node2_1 | 2023-10-18 10:08:41,772|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2023-10-18 10:08:41,772|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
webserver_1 |
node2_1 | 2023-10-18 10:08:41,772|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node3_1 | 2023-10-18 09:54:42,146|INFO|primary_connection_monitor_service.py|Node3:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
webserver_1 | The above exception was the direct cause of the following exception:
node2_1 | 2023-10-18 10:08:42,168|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:54:42,146|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
webserver_1 |
node2_1 | 2023-10-18 10:08:57,175|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:54:42,158|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 primary has been disconnected for too long
webserver_1 | Traceback (most recent call last):
node2_1 | 2023-10-18 10:09:12,192|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2023-10-18 10:09:27,195|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:54:42,158|INFO|primary_connection_monitor_service.py|Node3:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2023-10-18 10:09:41,768|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
webserver_1 | File "/home/indy/server/anchor.py", line 317, in open
node3_1 | 2023-10-18 09:54:42,158|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node2_1 | 2023-10-18 10:09:41,768|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
webserver_1 | await self._open_pool()
node3_1 | 2023-10-18 09:54:42,255|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2023-10-18 10:09:41,768|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
webserver_1 | File "/home/indy/server/anchor.py", line 223, in _open_pool
node3_1 | 2023-10-18 09:54:57,257|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2023-10-18 10:09:41,780|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
webserver_1 | raise AnchorException("Error opening pool ledger connection") from e
node3_1 | 2023-10-18 09:55:12,264|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2023-10-18 10:09:41,780|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
webserver_1 | server.anchor.AnchorException: Error opening pool ledger connection
node3_1 | 2023-10-18 09:55:27,275|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2023-10-18 10:09:41,780|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
webserver_1 | 2023-10-18 09:55:05,680|INFO|server.py|--- Trust anchor initialized ---
node3_1 | 2023-10-18 09:55:42,150|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 primary has been disconnected for too long
node2_1 | 2023-10-18 10:09:42,197|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:55:42,151|INFO|primary_connection_monitor_service.py|Node3:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2023-10-18 10:09:57,200|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:55:42,151|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node2_1 | 2023-10-18 10:10:12,203|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:55:42,167|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 primary has been disconnected for too long
node2_1 | 2023-10-18 10:10:27,210|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node3_1 | 2023-10-18 09:55:42,168|INFO|primary_connection_monitor_service.py|Node3:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2023-10-18 10:10:41,773|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node3_1 | 2023-10-18 09:55:42,168|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec



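For what it's worth, the `PoolLedgerTimeout` in the webserver traceback is raised by `indy.pool.open_pool_ledger` when no pool connection can be established within the timeout, which is expected here since the nodes never finish catch-up after the restart. As a workaround on the client side, the open call can be retried with a longer timeout. Below is a minimal sketch of that retry loop; the real call needs libindy, so a stand-in opener is used, and `open_pool_with_retry` plus its parameters are hypothetical names, not part of indy-sdk:

```python
import json
import time

# Stand-in for indy.error.PoolLedgerTimeout so the sketch is self-contained.
class PoolLedgerTimeout(Exception):
    pass

def open_pool_with_retry(open_fn, pool_name, attempts=3, delay=5):
    """Retry opening the pool ledger, since nodes may still be
    re-establishing their connections right after a restart."""
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            # indy-sdk's runtime config also accepts "timeout" /
            # "extended_timeout" keys to lengthen the connection timeout.
            return open_fn(pool_name, json.dumps({"timeout": 60}))
        except PoolLedgerTimeout as e:
            last_exc = e
            if attempt < attempts:
                time.sleep(delay)
    raise last_exc

# Fake opener that fails twice, then succeeds, to exercise the retry path:
calls = {"n": 0}
def fake_open(name, cfg):
    calls["n"] += 1
    if calls["n"] < 3:
        raise PoolLedgerTimeout()
    return "pool-handle"

print(open_pool_with_retry(fake_open, "nodepool", attempts=3, delay=0))
```

This only papers over the symptom, though; the underlying problem is still that the validator nodes cannot reach each other after the restart.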
@WadeBarnes
Member

@PenguinTaro, could you create a new issue for this? The last issue was resolved, and your issue may not have the same root cause.

@PenguinTaro

Okay
