This repository has been archived by the owner on Aug 2, 2022. It is now read-only.

Mongodb plugin exception while inserting block states #7737

Closed

fufesou opened this issue Aug 8, 2019 · 23 comments

@fufesou

fufesou commented Aug 8, 2019

I am using BOSCore v3.0.1-rc2, base EOSIO version v1.6.6.

Why does mongodb throw code 4? There are many errors with code 4 in mongocxx.
What are the possible reasons, and how can it be avoided?
Thank you.

'update_blocks_via_block_num' is set to false.

error 2019-08-07T17:46:51.370 thread-1  mongo_db_plugin.cpp:544       handle_mongo_excepti ] mongo exception, block_states insert: {...}, line 998, code 4, generic server error
error 2019-08-07T17:46:51.370 thread-1  mongo_db_plugin.cpp:546       handle_mongo_excepti ]   raw_server_error: { "nInserted" : 0, "nMatched" : 0, "nModified" : 0, "nRemoved" : 0, "nUpserted" : 0, "writeErrors" : [  ] }
info  2019-08-07T17:46:51.371 thread-0  net_plugin.cpp:3812           plugin_shutdown      ] shutdown..
info  2019-08-07T17:46:51.371 thread-0  net_plugin.cpp:3815           plugin_shutdown      ] close acceptor
info  2019-08-07T17:46:51.371 thread-0  net_plugin.cpp:3818           plugin_shutdown      ] close 7 connections
info  2019-08-07T17:46:51.371 thread-0  net_plugin.cpp:3826           plugin_shutdown      ] exit shutdown
info  2019-08-07T17:46:51.371 thread-0  mongo_db_plugin.cpp:1349      ~mongo_db_plugin_imp ] mongo_db_plugin shutdown in process please be patient this can take a few minutes
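
For reference, here is a minimal sketch of where those fields come from (my assumption: mongocxx 3.x against a local, unauthenticated mongod; this is not the plugin's exact code). `handle_mongo_exception` just prints what the caught `mongocxx::operation_exception` carries:

```cpp
// Minimal sketch, not the plugin's exact code (assumes mongocxx 3.x and a
// local, unauthenticated mongod): the "code 4" and raw_server_error that
// handle_mongo_exception logs come straight off the caught exception.
#include <iostream>
#include <bsoncxx/builder/basic/document.hpp>
#include <bsoncxx/json.hpp>
#include <mongocxx/client.hpp>
#include <mongocxx/exception/operation_exception.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/uri.hpp>

int main() {
    using bsoncxx::builder::basic::kvp;
    using bsoncxx::builder::basic::make_document;

    mongocxx::instance inst{};
    mongocxx::client client{mongocxx::uri{"mongodb://127.0.0.1:27017"}};
    auto block_states = client["EOS"]["block_states"];

    try {
        block_states.insert_one(make_document(kvp("block_num", 1)));
    } catch (const mongocxx::operation_exception& e) {
        // e.code().value() is the numeric "code 4" from the nodeos log;
        // e.code().message() is the "generic server error" text.
        std::cerr << "code " << e.code().value()
                  << ", " << e.code().message() << '\n';
        // raw_server_error() is the document dumped on plugin line 546.
        if (e.raw_server_error())
            std::cerr << bsoncxx::to_json(*e.raw_server_error()) << '\n';
    }
}
```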
@fufesou changed the title from "Mongodb plugin exception while insert block states" to "Mongodb plugin exception while inserting block states" on Aug 8, 2019
@heifner
Contributor

heifner commented Aug 8, 2019

I believe for those types of errors you need to check the mongo log and mongo docs.

@fufesou
Author

fufesou commented Aug 8, 2019

> I believe for those types of errors you need to check the mongo log and mongo docs.

The log was rotated when I restarted mongo today. Do you mean the possible reason would have been written in the mongo log file?

@heifner
Contributor

heifner commented Aug 8, 2019

Yes, the mongo_db_plugin just reports the error it got from mongo, so you need to see what problem mongo had.

@fufesou
Author

fufesou commented Aug 8, 2019

> Yes, the mongo_db_plugin just reports the error it got from mongo, so you need to see what problem mongo had.

Got it, thank you. The problem happened while nodeos was running with no requests coming in and nobody was at work. I'm afraid the problem may not happen again...

@heifner
Contributor

heifner commented Aug 9, 2019

Feel free to open a new issue or reopen this one if you discover new info that impacts mongo_db_plugin.

@heifner closed this as completed on Aug 9, 2019
@weimumu

weimumu commented Dec 29, 2019

@fufesou @heifner How did you solve this problem?

@fufesou
Author

fufesou commented Dec 29, 2019

> @fufesou @heifner How did you solve this problem?

It seems nothing can be done when this problem happens. When mongodb shuts down, users have to decide for themselves whether to shut down nodeos.
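
If it helps, one way to check from the client side whether mongod is still alive is a `ping` command. A minimal sketch, assuming mongocxx 3.x (the `mongod_is_alive` helper is hypothetical, not part of the plugin):

```cpp
// Hypothetical helper, not part of mongo_db_plugin (assumes mongocxx 3.x):
// returns true if mongod answers a "ping" within the driver's timeouts.
#include <bsoncxx/builder/basic/document.hpp>
#include <mongocxx/client.hpp>
#include <mongocxx/exception/exception.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/uri.hpp>

bool mongod_is_alive(mongocxx::client& client) {
    using bsoncxx::builder::basic::kvp;
    using bsoncxx::builder::basic::make_document;
    try {
        client["admin"].run_command(make_document(kvp("ping", 1)));
        return true;
    } catch (const mongocxx::exception&) {
        return false;
    }
}

int main() {
    mongocxx::instance inst{};  // must outlive all clients
    mongocxx::client client{mongocxx::uri{"mongodb://127.0.0.1:27017"}};
    return mongod_is_alive(client) ? 0 : 1;
}
```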

@weimumu

weimumu commented Dec 29, 2019

@fufesou So why does this problem happen? I have encountered this problem twice.

@weimumu

weimumu commented Dec 29, 2019

@fufesou You can see the issue in #8377

@weimumu

weimumu commented Dec 29, 2019

@fufesou @heifner
The error is:
[screenshot]
And I found the error in the source code:
[screenshot]

@fufesou
Author

fufesou commented Dec 29, 2019

> @fufesou @heifner
> The error is:
> [screenshot]
> And I found the error in the source code:
> [screenshot]

You can check the mongodb service log. Find the error message and fix it if possible.

@weimumu

weimumu commented Dec 29, 2019

@fufesou @heifner This is the log from when I just ran mongod, and the error happened again:
2019-12-29T14:57:38.475+0800 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-12-29T14:57:38.480+0800 I CONTROL [initandlisten] MongoDB starting : pid=14523 port=27017 dbpath=/mnt/4t3/mongodb_eos/db 64-bit host=kaho
2019-12-29T14:57:38.480+0800 I CONTROL [initandlisten] db version v4.2.2
2019-12-29T14:57:38.480+0800 I CONTROL [initandlisten] git version: a0bbbff6ada159e19298d37946ac8dc4b497eadf
2019-12-29T14:57:38.480+0800 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
2019-12-29T14:57:38.480+0800 I CONTROL [initandlisten] allocator: tcmalloc
2019-12-29T14:57:38.480+0800 I CONTROL [initandlisten] modules: none
2019-12-29T14:57:38.480+0800 I CONTROL [initandlisten] build environment:
2019-12-29T14:57:38.480+0800 I CONTROL [initandlisten] distmod: ubuntu1604
2019-12-29T14:57:38.480+0800 I CONTROL [initandlisten] distarch: x86_64
2019-12-29T14:57:38.480+0800 I CONTROL [initandlisten] target_arch: x86_64
2019-12-29T14:57:38.480+0800 I CONTROL [initandlisten] options: { net: { port: 27017 }, storage: { dbPath: "/mnt/4t3/mongodb_eos/db" }, systemLog: { destination: "file", path: "/mnt/4t3/mongodb_eos/logs/mongo.log" } }
2019-12-29T14:57:38.480+0800 I STORAGE [initandlisten] Detected data files in /mnt/4t3/mongodb_eos/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2019-12-29T14:57:38.480+0800 I STORAGE [initandlisten]
2019-12-29T14:57:38.480+0800 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine
2019-12-29T14:57:38.480+0800 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem
2019-12-29T14:57:38.480+0800 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=63910M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
2019-12-29T14:57:39.170+0800 I STORAGE [initandlisten] WiredTiger message [1577602659:170061][14523:0x7f45cdb1fb40], txn-recover: Recovering log 647 through 648
2019-12-29T14:57:39.252+0800 I STORAGE [initandlisten] WiredTiger message [1577602659:252117][14523:0x7f45cdb1fb40], txn-recover: Recovering log 648 through 648
2019-12-29T14:57:39.348+0800 I STORAGE [initandlisten] WiredTiger message [1577602659:348149][14523:0x7f45cdb1fb40], txn-recover: Main recovery loop: starting at 647/29810304 to 648/256
2019-12-29T14:57:39.439+0800 I STORAGE [initandlisten] WiredTiger message [1577602659:439334][14523:0x7f45cdb1fb40], txn-recover: Recovering log 647 through 648
2019-12-29T14:57:39.510+0800 I STORAGE [initandlisten] WiredTiger message [1577602659:510688][14523:0x7f45cdb1fb40], txn-recover: Recovering log 648 through 648
2019-12-29T14:57:39.584+0800 I STORAGE [initandlisten] WiredTiger message [1577602659:584200][14523:0x7f45cdb1fb40], txn-recover: Set global recovery timestamp: (0,0)
2019-12-29T14:57:39.824+0800 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
2019-12-29T14:57:39.837+0800 I STORAGE [initandlisten] Timestamp monitor starting
2019-12-29T14:57:39.917+0800 I CONTROL [initandlisten]
2019-12-29T14:57:39.917+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-12-29T14:57:39.917+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2019-12-29T14:57:39.917+0800 I CONTROL [initandlisten]
2019-12-29T14:57:39.917+0800 I CONTROL [initandlisten] ** WARNING: This server is bound to localhost.
2019-12-29T14:57:39.917+0800 I CONTROL [initandlisten] ** Remote systems will be unable to connect to this server.
2019-12-29T14:57:39.917+0800 I CONTROL [initandlisten] ** Start the server with --bind_ip <address> to specify which IP
2019-12-29T14:57:39.917+0800 I CONTROL [initandlisten] ** addresses it should serve responses from, or with --bind_ip_all to
2019-12-29T14:57:39.917+0800 I CONTROL [initandlisten] ** bind to all interfaces. If this behavior is desired, start the
2019-12-29T14:57:39.917+0800 I CONTROL [initandlisten] ** server with --bind_ip 127.0.0.1 to disable this warning.
2019-12-29T14:57:39.917+0800 I CONTROL [initandlisten]
2019-12-29T14:57:39.917+0800 I CONTROL [initandlisten]
2019-12-29T14:57:39.917+0800 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 514532 processes, 1048576 files. Number of processes should be at least 524288 : 0.5 times number of files.
2019-12-29T14:57:39.934+0800 I SHARDING [initandlisten] Marking collection local.system.replset as collection version:
2019-12-29T14:57:39.937+0800 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
2019-12-29T14:57:39.937+0800 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version:
2019-12-29T14:57:39.937+0800 I SHARDING [initandlisten] Marking collection admin.system.version as collection version:
2019-12-29T14:57:39.939+0800 I SHARDING [initandlisten] Marking collection local.startup_log as collection version:
2019-12-29T14:57:39.939+0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/mnt/4t3/mongodb_eos/db/diagnostic.data'
2019-12-29T14:57:39.941+0800 I NETWORK [initandlisten] Listening on /tmp/mongodb-27017.sock
2019-12-29T14:57:39.941+0800 I SHARDING [LogicalSessionCacheRefresh] Marking collection config.system.sessions as collection version:
2019-12-29T14:57:39.941+0800 I NETWORK [initandlisten] Listening on 127.0.0.1
2019-12-29T14:57:39.941+0800 I NETWORK [initandlisten] waiting for connections on port 27017
2019-12-29T14:57:39.942+0800 I SHARDING [LogicalSessionCacheReap] Marking collection config.transactions as collection version:
2019-12-29T14:57:40.000+0800 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version:
2019-12-29T14:59:31.367+0800 I NETWORK [listener] connection accepted from 127.0.0.1:53312 #1 (1 connection now open)
2019-12-29T14:59:31.370+0800 I NETWORK [conn1] received client metadata from 127.0.0.1:53312 conn1: { driver: { name: "nodejs", version: "3.4.1" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.4.0-170-generic" }, platform: "Node.js v12.12.0, LE" }
2019-12-29T14:59:31.380+0800 I SHARDING [conn1] Marking collection EOS.transaction_traces as collection version:
2019-12-29T14:59:31.550+0800 I COMMAND [conn1] command EOS.transaction_traces command: find { find: "transaction_traces", filter: { id: "cb182ab6744234d0c55aaa8120dfb69d1a696c0ed596d5f0f00da98da6c25835" }, returnKey: false, showRecordId: false, lsid: { id: UUID("786a5c76-ed58-4c1c-a916-430611dddc34") }, $db: "EOS" } planSummary: IXSCAN { id: 1, _id: 1 } keysExamined:1 docsExamined:1 cursorExhausted:1 numYields:1 nreturned:1 queryHash:6DAB46EC planCacheKey:801B9D84 reslen:5219 locks:{ ReplicationStateTransition: { acquireCount: { w: 2 } }, Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 2 } }, Collection: { acquireCount: { r: 2 } }, Mutex: { acquireCount: { r: 1 } } } storage:{ data: { bytesRead: 132627, timeReadingMicros: 136 } } protocol:op_msg 169ms
2019-12-29T14:59:31.558+0800 I NETWORK [conn1] end connection 127.0.0.1:53312 (0 connections now open)
2019-12-29T14:59:53.590+0800 I NETWORK [listener] connection accepted from 127.0.0.1:53314 #2 (1 connection now open)
2019-12-29T14:59:53.593+0800 I NETWORK [conn2] received client metadata from 127.0.0.1:53314 conn2: { driver: { name: "nodejs", version: "3.4.1" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.4.0-170-generic" }, platform: "Node.js v12.12.0, LE" }
2019-12-29T14:59:53.610+0800 I NETWORK [conn2] end connection 127.0.0.1:53314 (0 connections now open)
2019-12-29T15:13:31.126+0800 I NETWORK [listener] connection accepted from 127.0.0.1:53320 #3 (1 connection now open)
2019-12-29T15:13:31.126+0800 I NETWORK [conn3] received client metadata from 127.0.0.1:53320 conn3: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "4.2.2" }, os: { type: "Linux", name: "Ubuntu", architecture: "x86_64", version: "16.04" } }
2019-12-29T15:32:14.777+0800 I NETWORK [listener] connection accepted from 127.0.0.1:53322 #4 (2 connections now open)
2019-12-29T15:32:14.777+0800 I NETWORK [conn4] received client metadata from 127.0.0.1:53322 conn4: { driver: { name: "mongoc / mongocxx", version: "1.13.0 / 3.4.0" }, os: { type: "Linux", name: "Ubuntu", version: "16.04", architecture: "x86_64" }, platform: "cfg=0x215620c9 posix=200809 stdc=201112 CC=clang 8.0.1 (https://git.llvm.org/git/clang.git a03da8be08a208122e292016cb6cea1f30229677) (https://git.llvm..." }
2019-12-29T15:32:14.778+0800 I NETWORK [listener] connection accepted from 127.0.0.1:53324 #5 (3 connections now open)
2019-12-29T15:32:14.778+0800 I NETWORK [conn5] received client metadata from 127.0.0.1:53324 conn5: { driver: { name: "mongoc / mongocxx", version: "1.13.0 / 3.4.0" }, os: { type: "Linux", name: "Ubuntu", version: "16.04", architecture: "x86_64" }, platform: "cfg=0x215620c9 posix=200809 stdc=201112 CC=clang 8.0.1 (https://git.llvm.org/git/clang.git a03da8be08a208122e292016cb6cea1f30229677) (https://git.llvm..." }
2019-12-29T15:32:14.778+0800 I SHARDING [conn5] Marking collection EOS.accounts as collection version:
2019-12-29T15:32:14.864+0800 I SHARDING [conn5] Marking collection EOS.pub_keys as collection version:
2019-12-29T15:34:31.885+0800 I SHARDING [conn5] Marking collection EOS.account_controls as collection version:
2019-12-29T15:47:40.117+0800 I NETWORK [listener] connection accepted from 127.0.0.1:53326 #6 (4 connections now open)
2019-12-29T15:47:40.120+0800 I NETWORK [conn6] received client metadata from 127.0.0.1:53326 conn6: { driver: { name: "nodejs", version: "3.4.1" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.4.0-170-generic" }, platform: "Node.js v12.12.0, LE" }
2019-12-29T15:47:40.137+0800 I NETWORK [conn6] end connection 127.0.0.1:53326 (3 connections now open)
2019-12-29T16:46:24.763+0800 I COMMAND [conn5] command EOS.transaction_traces command: insert { insert: "transaction_traces", ordered: true, $db: "EOS", lsid: { id: UUID("0348b595-1507-44ef-85c0-565de90bf153") } } ninserted:1 keysInserted:2 numYields:0 reslen:45 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 8333, timeReadingMicros: 14 } } protocol:op_msg 48681ms
2019-12-29T16:50:09.701+0800 I COMMAND [conn5] command EOS.transaction_traces command: insert { insert: "transaction_traces", ordered: true, $db: "EOS", lsid: { id: UUID("0348b595-1507-44ef-85c0-565de90bf153") } } ninserted:1 keysInserted:2 numYields:0 reslen:45 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 146454ms
2019-12-29T16:50:12.191+0800 I COMMAND [conn5] command EOS.transaction_traces command: insert { insert: "transaction_traces", ordered: true, $db: "EOS", lsid: { id: UUID("0348b595-1507-44ef-85c0-565de90bf153") } } ninserted:1 keysInserted:2 numYields:0 reslen:45 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 105ms
2019-12-29T16:51:17.846+0800 I COMMAND [conn5] command EOS.transaction_traces command: insert { insert: "transaction_traces", ordered: true, $db: "EOS", lsid: { id: UUID("0348b595-1507-44ef-85c0-565de90bf153") } } ninserted:1 keysInserted:2 numYields:0 reslen:45 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 1962ms
2019-12-29T16:51:23.574+0800 I COMMAND [conn5] command EOS.transaction_traces command: insert { insert: "transaction_traces", ordered: true, $db: "EOS", lsid: { id: UUID("0348b595-1507-44ef-85c0-565de90bf153") } } ninserted:1 keysInserted:2 numYields:0 reslen:45 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 694ms
2019-12-29T16:51:33.889+0800 I COMMAND [conn5] command EOS.transaction_traces command: insert { insert: "transaction_traces", ordered: true, $db: "EOS", lsid: { id: UUID("0348b595-1507-44ef-85c0-565de90bf153") } } ninserted:1 keysInserted:2 numYields:0 reslen:45 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 4003ms
2019-12-29T16:51:43.062+0800 I COMMAND [conn5] command EOS.transaction_traces command: insert { insert: "transaction_traces", ordered: true, $db: "EOS", lsid: { id: UUID("0348b595-1507-44ef-85c0-565de90bf153") } } ninserted:1 keysInserted:2 numYields:0 reslen:45 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 5997ms
2019-12-29T16:52:21.875+0800 I NETWORK [listener] connection accepted from 127.0.0.1:53328 #7 (4 connections now open)
2019-12-29T16:56:21.301+0800 I COMMAND [conn5] command EOS.transaction_traces command: insert { insert: "transaction_traces", ordered: true, $db: "EOS", lsid: { id: UUID("0348b595-1507-44ef-85c0-565de90bf153") } } ninserted:1 keysInserted:2 numYields:0 reslen:45 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 273741ms
2019-12-29T16:56:21.301+0800 I NETWORK [conn7] received client metadata from 127.0.0.1:53328 conn7: { driver: { name: "nodejs", version: "3.4.1" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.4.0-170-generic" }, platform: "Node.js v12.12.0, LE" }
2019-12-29T16:56:21.301+0800 I NETWORK [conn7] end connection 127.0.0.1:53328 (3 connections now open)
2019-12-29T16:56:55.133+0800 I NETWORK [listener] connection accepted from 127.0.0.1:53330 #8 (4 connections now open)
2019-12-29T16:56:55.136+0800 I NETWORK [conn8] received client metadata from 127.0.0.1:53330 conn8: { driver: { name: "nodejs", version: "3.4.1" }, os: { type: "Linux", name: "linux", architecture: "x64", version: "4.4.0-170-generic" }, platform: "Node.js v12.12.0, LE" }
2019-12-29T16:56:55.153+0800 I NETWORK [conn8] end connection 127.0.0.1:53330 (3 connections now open)
2019-12-29T16:57:29.026+0800 I COMMAND [conn5] command EOS.transaction_traces command: insert { insert: "transaction_traces", ordered: true, $db: "EOS", lsid: { id: UUID("0348b595-1507-44ef-85c0-565de90bf153") } } ninserted:1 keysInserted:2 numYields:0 reslen:45 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 801ms
2019-12-29T16:57:33.047+0800 I COMMAND [conn5] command EOS.transaction_traces command: insert { insert: "transaction_traces", ordered: true, $db: "EOS", lsid: { id: UUID("0348b595-1507-44ef-85c0-565de90bf153") } } ninserted:1 keysInserted:2 numYields:0 reslen:45 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 2405ms
2019-12-29T16:57:46.772+0800 I COMMAND [conn5] command EOS.transaction_traces command: insert { insert: "transaction_traces", ordered: true, $db: "EOS", lsid: { id: UUID("0348b595-1507-44ef-85c0-565de90bf153") } } ninserted:1 keysInserted:2 numYields:0 reslen:45 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 12961, timeReadingMicros: 16 } } protocol:op_msg 346ms
2019-12-29T17:02:52.617+0800 I NETWORK [listener] connection accepted from 127.0.0.1:53332 #9 (4 connections now open)
2019-12-29T17:03:26.230+0800 I COMMAND [conn5] command EOS.transaction_traces command: insert { insert: "transaction_traces", ordered: true, $db: "EOS", lsid: { id: UUID("0348b595-1507-44ef-85c0-565de90bf153") } } ninserted:1 keysInserted:2 numYields:0 reslen:45 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{} protocol:op_msg 333662ms
2019-12-29T17:03:26.230+0800 I NETWORK [conn5] end connection 127.0.0.1:53324 (3 connections now open)
2019-12-29T17:03:26.231+0800 I NETWORK [conn9] received client metadata from 127.0.0.1:53332 conn9: { driver: { name: "mongoc / mongocxx", version: "1.13.0 / 3.4.0" }, os: { type: "Linux", name: "Ubuntu", version: "16.04", architecture: "x86_64" }, platform: "cfg=0x215620c9 posix=200809 stdc=201112 CC=clang 8.0.1 (https://git.llvm.org/git/clang.git a03da8be08a208122e292016cb6cea1f30229677) (https://git.llvm..." }
2019-12-29T17:03:28.663+0800 I COMMAND [conn9] command EOS.transaction_traces command: insert { insert: "transaction_traces", ordered: true, $db: "EOS", lsid: { id: UUID("0348b595-1507-44ef-85c0-565de90bf153") } } ninserted:1 keysInserted:2 numYields:0 reslen:45 locks:{ ParallelBatchWriterMode: { acquireCount: { r: 1 } }, ReplicationStateTransition: { acquireCount: { w: 1 } }, Global: { acquireCount: { w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } }, Mutex: { acquireCount: { r: 2 } } } flowControl:{ acquireCount: 1 } storage:{ data: { bytesRead: 15236, timeReadingMicros: 15 } } protocol:op_msg 582ms
2019-12-29T17:03:30.462+0800 I NETWORK [conn9] end connection 127.0.0.1:53332 (2 connections now open)
2019-12-29T17:03:30.462+0800 I NETWORK [conn4] end connection 127.0.0.1:53322 (1 connection now open)

@weimumu

weimumu commented Dec 29, 2019

@fufesou I also found that this error happened when the RAM used by mongod was 10GB. My server has 120GB of RAM.
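
One way to watch how much of the WiredTiger cache mongod is actually using is to query serverStatus. A sketch, assuming mongocxx 3.x and no auth (the same numbers appear under `db.serverStatus().wiredTiger.cache` in the mongo shell):

```cpp
// Hedged sketch (assumes mongocxx 3.x, no auth): read WiredTiger cache
// usage from serverStatus; the keys are standard serverStatus field names.
#include <iostream>
#include <bsoncxx/builder/basic/document.hpp>
#include <mongocxx/client.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/uri.hpp>

int main() {
    using bsoncxx::builder::basic::kvp;
    using bsoncxx::builder::basic::make_document;

    mongocxx::instance inst{};
    mongocxx::client client{mongocxx::uri{"mongodb://127.0.0.1:27017"}};

    auto status = client["admin"].run_command(make_document(kvp("serverStatus", 1)));
    auto cache  = status.view()["wiredTiger"]["cache"];

    // "maximum bytes configured" should match the cache_size=63910M in the
    // wiredtiger_open line of the log above.
    std::cout << "bytes in cache: "
              << cache["bytes currently in the cache"].get_int64().value << '\n'
              << "max bytes:      "
              << cache["maximum bytes configured"].get_int64().value << '\n';
}
```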

@fufesou
Author

fufesou commented Dec 29, 2019

> @fufesou I also found that this error happened when the RAM used by mongod was 10GB. My server has 120GB of RAM.

I cannot find any problems in your log. Is there any log from right when nodeos was shutting down?

@weimumu

weimumu commented Dec 29, 2019

@fufesou This is the end of the log, but I cannot find anything.
Could this be caused by an error in my config.ini configuration? Nodeos replays transactions after startup. Generally, it can synchronize the traces of tens of thousands of transactions to mongodb, and then it hangs.
[screenshot]

@fufesou
Author

fufesou commented Dec 29, 2019

> @fufesou This is the end of the log, but I cannot find anything.
> Could this be caused by an error in my config.ini configuration? Nodeos replays transactions after startup. Generally, it can synchronize the traces of tens of thousands of transactions to mongodb, and then it hangs.
> [screenshot]

Did mongodb hang at 19:05, or did mongo_db_plugin shut down at 19:05? Sorry, I cannot find any error here.

@weimumu

weimumu commented Dec 29, 2019

The last log line from mongod is in the red box. mongod did not hang; it is nodeos that hangs during replay.

@weimumu

weimumu commented Dec 29, 2019

@fufesou
My situation is similar to yours, but I don't know how you solved this problem. The nodeos log is similar to yours.
[screenshot]

@fufesou
Author

fufesou commented Dec 29, 2019

> @fufesou
> My situation is similar to yours, but I don't know how you solved this problem. The nodeos log is similar to yours.
> [screenshot]

Yes, I also think it is caused by the mongod service.
Maybe you can try inserting some documents after nodeos shuts down, while mongod is still running.
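
A sketch of that test, assuming mongocxx 3.x (database and collection names below are just examples): time standalone inserts and watch for stalls. Your mongo log above already shows single inserts taking up to ~333 seconds (`protocol:op_msg 333662ms`), so a stalled mongod should show up quickly:

```cpp
// Hedged sketch (assumes mongocxx 3.x; database/collection names are just
// examples): time standalone inserts against mongod while nodeos is down.
#include <chrono>
#include <iostream>
#include <bsoncxx/builder/basic/document.hpp>
#include <mongocxx/client.hpp>
#include <mongocxx/instance.hpp>
#include <mongocxx/uri.hpp>

int main() {
    using bsoncxx::builder::basic::kvp;
    using bsoncxx::builder::basic::make_document;
    using clock_type = std::chrono::steady_clock;

    mongocxx::instance inst{};
    mongocxx::client client{mongocxx::uri{"mongodb://127.0.0.1:27017"}};
    auto coll = client["EOS"]["insert_probe"];  // throwaway test collection

    for (int i = 0; i < 100; ++i) {
        auto start = clock_type::now();
        coll.insert_one(make_document(kvp("n", i)));
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                      clock_type::now() - start).count();
        // The mongo log above shows plugin inserts taking up to ~333s
        // (op_msg 333662ms); anything near that here implicates mongod.
        if (ms > 1000)
            std::cerr << "slow insert #" << i << ": " << ms << " ms\n";
    }
}
```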

@weimumu

weimumu commented Dec 29, 2019

> Yes, I also think it is caused by the mongod service.
> Maybe you can try inserting some documents after nodeos shuts down, while mongod is still running.

How did you solve these problems in the first place? What versions of mongod and nodeos do you use?

@fufesou
Author

fufesou commented Dec 29, 2019

> How did you solve these problems in the first place? What versions of mongod and nodeos do you use?

In the end, I chose to use the history_plugin instead of the mongodb plugin.

@heifner
Contributor

heifner commented Jan 2, 2020

@fufesou I believe this is some kind of mongo error. Maybe seek help from mongo and report back.
