This repository has been archived by the owner on Nov 6, 2020. It is now read-only.

Parity 1.7.6 - 1.8.0 stuck syncing #6787

Closed
AdvancedStyle opened this issue Oct 16, 2017 · 25 comments
Labels
F2-bug 🐞 The client fails to follow expected behavior. M4-core ⛓ Core client code / Rust. P2-asap 🌊 No need to stop dead in your tracks, however issue should be addressed as soon as possible.

@AdvancedStyle

  • Parity version: 1.7.6 and 1.8.0
  • Ubuntu 16.04.3 LTS
  • Release version 1.7.6 and 1.8.0

After the fork I'm having issues with parity getting stuck in "syncing" status. I've tested both 1.7.6 and 1.8.0, and the issue happens randomly after running fine for about 30 minutes. All of a sudden the peers drop from 4 to 1 and the client gets stuck "syncing" the same block.

Here's an example of the parity output where the "Syncing" occurs.

2017-10-16 16:04:28     4/25 peers   181 MiB chain 67 MiB db 0 bytes queue 23 KiB sync  RPC:  0 conn, 13 req/s, 4720 µs
2017-10-16 16:04:32  Imported #4370545 2a1e…77d1 (164 txs, 6.70 Mgas, 207.03 ms, 25.38 KiB)
2017-10-16 16:05:06     1/25 peers   184 MiB chain 66 MiB db 0 bytes queue 23 KiB sync  RPC:  0 conn,  1 req/s, 6545 µs
2017-10-16 16:05:36  Reorg to #4370546 2291…5759 (2a1e…77d1 #4370544 d546…fe82 9fe2…9167)
2017-10-16 16:05:45     1/25 peers   184 MiB chain 67 MiB db 107 KiB queue 23 KiB sync  RPC:  0 conn,  1 req/s, 7917 µs
2017-10-16 16:06:05  Syncing #4370549 c5d9…449e     0 blk/s    0 tx/s   0 Mgas/s      0+    4 Qed  #4370553    1/25 peers   185 MiB chain 67 MiB db 309 KiB queue 23 KiB sync  RPC:  0 conn,  2 req/s, 7221 µs
2017-10-16 16:06:16  Syncing #4370549 c5d9…449e     0 blk/s    0 tx/s   0 Mgas/s      0+    4 Qed  #4370553    1/25 peers   185 MiB chain 67 MiB db 309 KiB queue 23 KiB sync  RPC:  0 conn,  0 req/s, 7221 µs
2017-10-16 16:06:33  Syncing #4370549 c5d9…449e     0 blk/s    0 tx/s   0 Mgas/s      0+    5 Qed  #4370554    1/25 peers   185 MiB chain 67 MiB db 358 KiB queue 23 KiB sync  RPC:  0 conn,  8 req/s, 55110 µs
2017-10-16 16:06:46  Syncing #4370549 c5d9…449e     0 blk/s    0 tx/s   0 Mgas/s      0+    5 Qed  #4370554    1/25 peers   185 MiB chain 67 MiB db 358 KiB queue 23 KiB sync  RPC:  0 conn, 23 req/s, 243 µs
2017-10-16 16:06:55  Syncing #4370549 c5d9…449e     0 blk/s    0 tx/s   0 Mgas/s      0+    5 Qed  #4370554    1/25 peers   185 MiB chain 67 MiB db 358 KiB queue 23 KiB sync  RPC:  0 conn,  8 req/s, 4613 µs

@5chdn 5chdn added the Z1-question 🙋‍♀️ Issue is a question. Closer should answer. label Oct 16, 2017
@5chdn
Contributor

5chdn commented Oct 16, 2017

Probably a network issue. What's your configuration?

@diegopeleteiro

diegopeleteiro commented Oct 16, 2017

I can reproduce this with version 1.7.7.
We run on the Kovan chain with --no-warp.

Interestingly, though:
we have 3 machines running the parity client; 2 of them are stuck, 1 is syncing.
All machines have the same configs and are located on different subnets.

@AdvancedStyle
Author

AdvancedStyle commented Oct 16, 2017

The issue is also present in 1.9.0-unstable.

Configuration is:

parity --force-sealing --allow-ips public --cache-size 4096 --jsonrpc-apis web3,eth,net,parity,traces,rpc,personal --geth

(--force-sealing and --allow-ips public were added in order to try to fix the issue, but do not seem to have helped)

@AdvancedStyle
Author

Here's a sample from the -lsync=trace output once parity has basically lost all peers and is no longer syncing blocks:

Worker #3 TRACE sync  Status packet from expired session 32:Geth/v1.6.7-unstable-4b8860a7/linux-amd64/go1.8.3
2017-10-17 10:44:53  IO Worker #1 TRACE sync  == Connected 83: Parity/v1.8.0-unstable-dd36b4c-20171006/x86_64-linux-gnu/rustc1.20.0
2017-10-17 10:44:53  IO Worker #1 TRACE sync  Sending status to 83, protocol version 2
2017-10-17 10:44:53  IO Worker #0 TRACE sync  == Connected 85: Geth/v1.6.1-unstable-cad07100/linux-amd64/go1.7.3
2017-10-17 10:44:53  IO Worker #0 TRACE sync  Sending status to 85, protocol version 63
2017-10-17 10:44:53  IO Worker #3 TRACE sync  Status timeout 50
2017-10-17 10:44:55  IO Worker #2 TRACE sync  == Connected 2: Geth/v1.6.7-stable-ab5646c5/linux-amd64/go1.8.1
2017-10-17 10:44:55  IO Worker #2 TRACE sync  Sending status to 2, protocol version 63
2017-10-17 10:44:55  IO Worker #1 TRACE sync  == Disconnecting 83: Parity/v1.8.0-unstable-dd36b4c-20171006/x86_64-linux-gnu/rustc1.20.0
2017-10-17 10:44:55  IO Worker #0 TRACE sync  Status timeout 50
2017-10-17 10:44:57  IO Worker #3 TRACE sync  New peer 2 (protocol: 63, network: 511337, difficulty: Some(2227506255382), latest:0f5d…a291, genesis:6577…44e1, snapshot:None)
2017-10-17 10:44:57  IO Worker #3 TRACE sync  Peer 2 genesis hash mismatch (ours: d4e5…8fa3, theirs: 6577…44e1)
2017-10-17 10:44:57  IO Worker #2 TRACE sync  Status timeout 50
2017-10-17 10:44:57  IO Worker #0 TRACE sync  == Disconnecting 85: Geth/v1.6.1-unstable-cad07100/linux-amd64/go1.7.3
2017-10-17 10:44:57  IO Worker #1 TRACE sync  New peer 85 (protocol: 63, network: 3762, difficulty: Some(1884019), latest:1689…3679, genesis:1b8d…0f6c, snapshot:None)
2017-10-17 10:44:57  IO Worker #1 TRACE sync  Status packet from expired session 85:Geth/v1.6.1-unstable-cad07100/linux-amd64/go1.7.3
2017-10-17 10:44:57  IO Worker #3 TRACE sync  == Disconnecting 2: Geth/v1.6.7-stable-ab5646c5/linux-amd64/go1.8.1
2017-10-17 10:44:57  IO Worker #2 TRACE sync  Status timeout 50
2017-10-17 10:44:59  IO Worker #0 TRACE sync  == Connected 72: Geth/v4.0.0/linux/go1.8
2017-10-17 10:44:59  IO Worker #0 TRACE sync  Sending status to 72, protocol version 63
2017-10-17 10:44:59  IO Worker #0 DEBUG sync  Error sending status request: Expired
2017-10-17 10:44:59  IO Worker #0 TRACE sync  == Connected 75: Geth/v1.5.9-stable-3c26ec40/linux/go1.7.4
2017-10-17 10:44:59  IO Worker #0 TRACE sync  Sending status to 75, protocol version 63
2017-10-17 10:44:59  IO Worker #1 TRACE sync  == Connected 53: Geth/v1.6.0-stable-facc47cb/linux-amd64/go1.7.3
2017-10-17 10:44:59  IO Worker #1 TRACE sync  Sending status to 53, protocol version 63
2017-10-17 10:44:59  IO Worker #3 TRACE sync  == Disconnecting 72: Geth/v4.0.0/linux/go1.8
2017-10-17 10:44:59  IO Worker #0 TRACE sync  == Connected 80: Geth/v1.6.7-unstable-4b8860a7/linux-amd64/go1.8.3
2017-10-17 10:44:59  IO Worker #0 TRACE sync  Sending status to 80, protocol version 63
2017-10-17 10:44:59  IO Worker #0 TRACE sync  Status timeout 50
2017-10-17 10:45:01  IO Worker #3 TRACE sync  == Disconnecting 80: Geth/v1.6.7-unstable-4b8860a7/linux-amd64/go1.8.3
2017-10-17 10:45:01  IO Worker #2 TRACE sync  == Disconnecting 75: Geth/v1.5.9-stable-3c26ec40/linux/go1.7.4
2017-10-17 10:45:01  IO Worker #1 TRACE sync  Status timeout 50
2017-10-17 10:45:01  IO Worker #0 TRACE sync  Status timeout 50
2017-10-17 10:45:03  IO Worker #1 TRACE sync  == Disconnecting 53: Geth/v1.6.0-stable-facc47cb/linux-amd64/go1.7.3
2017-10-17 10:45:03  IO Worker #2 TRACE sync  Status timeout 50
2017-10-17 10:45:03  IO Worker #3 TRACE sync  Status timeout 50
2017-10-17 10:45:04  IO Worker #1 TRACE sync  Status timeout 50
2017-10-17 10:45:05  IO Worker #0 TRACE sync  Status timeout 50
2017-10-17 10:45:05  IO Worker #3 TRACE sync  Status timeout 50
2017-10-17 10:45:06  IO Worker #1 TRACE sync  == Connected 91: Geth/v1.6.7-stable-ab5646c5/linux-amd64/go1.8.1
2017-10-17 10:45:06  IO Worker #1 TRACE sync  Sending status to 91, protocol version 63
2017-10-17 10:45:06  IO Worker #3 TRACE sync  == Connected 33: Parity/v1.8.0-beta-9882902-20171015/x86_64-linux-gnu/rustc1.20.0
2017-10-17 10:45:06  IO Worker #3 TRACE sync  Sending status to 33, protocol version 2
2017-10-17 10:45:06  IO Worker #2 TRACE sync  Status timeout 50
2017-10-17 10:45:08  IO Worker #0 TRACE sync  == Connected 19: Parity/v1.7.6-unstable-1953533-20171013/x86_64-linux-gnu/rustc1.19.0
2017-10-17 10:45:08  IO Worker #0 TRACE sync  Sending status to 19, protocol version 2
2017-10-17 10:45:08  IO Worker #2 TRACE sync  Status timeout 50



E sync  Syncing with peers: 1 active, 1 confirmed, 1 total
2017-10-17 10:47:55  IO Worker #3 TRACE sync  Skipping busy peer 22
2017-10-17 10:47:55  IO Worker #0 TRACE sync  Status timeout 91
2017-10-17 10:47:55  IO Worker #0 TRACE sync  Status timeout 16
2017-10-17 10:47:55  IO Worker #0 TRACE sync  Status timeout 50
2017-10-17 10:47:55  IO Worker #0 TRACE sync  Status timeout 93
2017-10-17 10:47:55  IO Worker #0 TRACE sync  22 <- Transactions (2 entries)
2017-10-17 10:47:55  IO Worker #0 DEBUG sync  Sent up to 2 transactions to 1 peers.
2017-10-17 10:47:55  IO Worker #0 TRACE sync  22 <- Transactions (20 entries)
2017-10-17 10:47:55  IO Worker #0 DEBUG sync  Sent up to 20 transactions to 1 peers.
2017-10-17 10:47:55  IO Worker #1 TRACE sync  == Disconnecting 71: Parity/v1.7.2-unstable/x86_64-linux-gnu/rustc1.20.0
2017-10-17 10:47:55  IO Worker #0 TRACE sync  Status timeout 91
2017-10-17 10:47:55  IO Worker #0 TRACE sync  Status timeout 16
2017-10-17 10:47:55  IO Worker #0 TRACE sync  Status timeout 50
2017-10-17 10:47:55  IO Worker #0 TRACE sync  Status timeout 93
2017-10-17 10:47:56  IO Worker #0 TRACE sync  22 <- Transactions (2 entries)
2017-10-17 10:47:56  IO Worker #0 DEBUG sync  Sent up to 2 transactions to 1 peers.
2017-10-17 10:47:56  IO Worker #0 TRACE sync  22 <- Transactions (20 entries)
2017-10-17 10:47:56  IO Worker #0 DEBUG sync  Sent up to 20 transactions to 1 peers.
2017-10-17 10:47:56  IO Worker #0 TRACE sync  == Disconnecting 1: Parity/v1.7.7-stable-eb7c648-20171015/x86_64-linux-gnu/rustc1.20.0
2017-10-17 10:47:56  IO Worker #2 TRACE sync  == Disconnecting 87: Parity/v1.7.7-stable-eb7c648-20171015/x86_64-linux-gnu/rustc1.20.0
2017-10-17 10:47:56  IO Worker #3 TRACE sync  == Disconnecting 60: Parity/v1.6.8-beta-c396229-20170608/x86_64-linux-gnu/rustc1.17.0
2017-10-17 10:47:56  IO Worker #1 DEBUG sync  Unexpected packet 1 from unregistered peer: 1:Parity/v1.7.7-stable-eb7c648-20171015/x86_64-linux-gnu/rustc1.20.0
2017-10-17 10:47:56  IO Worker #3 TRACE sync  Status timeout 91
2017-10-17 10:47:56  IO Worker #3 TRACE sync  Status timeout 16
2017-10-17 10:47:56  IO Worker #3 TRACE sync  Status timeout 50
2017-10-17 10:47:56  IO Worker #3 TRACE sync  Status timeout 93
2017-10-17 10:47:57  IO Worker #3 TRACE sync  22 <- Transactions (2 entries)
2017-10-17 10:47:57  IO Worker #3 DEBUG sync  Sent up to 2 transactions to 1 peers.
2017-10-17 10:47:57  IO Worker #3 TRACE sync  22 <- Transactions (20 entries)
2017-10-17 10:47:57  IO Worker #3 DEBUG sync  Sent up to 20 transactions to 1 peers.
2017-10-17 10:47:57  IO Worker #1 TRACE sync  Status timeout 91
2017-10-17 10:47:57  IO Worker #1 TRACE sync  Status timeout 16
2017-10-17 10:47:57  IO Worker #1 TRACE sync  Status timeout 50
2017-10-17 10:47:57  IO Worker #1 TRACE sync  Status timeout 93
2017-10-17 10:47:57  IO Worker #0 TRACE sync  Status timeout 91
2017-10-17 10:47:57  IO Worker #0 TRACE sync  Status timeout 16
2017-10-17 10:47:57  IO Worker #0 TRACE sync  Status timeout 50

@diegopeleteiro

We got a fix from one of our colleagues; we are now running with the following config (in a Docker container):

---
- name: parity container
  docker_container:
    name: parity
    image: parity/parity:v{{ parityVersion | default('1.6.9') }}
    pull: true
    network_mode: "host"
    volumes:
      - "/home/centos:/mnt"
    command: >
          --chain "{{ chain }}"
          --jsonrpc-hosts all  --jsonrpc-interface all --base-path /mnt
    restart_policy: unless-stopped

For {{ chain }} we use kovan in testing.

@AdvancedStyle
Author

Which part of that specifically was the fix? I don't see how adding --jsonrpc-hosts all --jsonrpc-interface all would fix this issue.

@diegopeleteiro

Hi, my bad, should have posted the original:

---
- name: parity container
  docker_container:
    name: parity
    image: parity/parity:v{{ parityVersion | default('1.6.9') }}
    pull: true
    published_ports:
      - "8080:8080"
      - "8180:8180"
      - "8545:8545"
    volumes:
      - "/home/centos:/mnt"
    command: >
          --chain "{{ chain }}"
          --jsonrpc-hosts all  --jsonrpc-interface all --base-path /mnt/ --no-warp
    restart_policy: unless-stopped

We replaced the published ports with network_mode: host.
Also, we removed --no-warp.

I hope that helps

@AdvancedStyle
Author

I'm not using --no-warp or Docker, so I'm still looking for a fix.

@AdvancedStyle
Author

Also I've tried running the connection through a VPN (in case there was some issue with the ISP), but still same issue.

@5chdn
Contributor

5chdn commented Oct 18, 2017

@diegopeleteiro that's an unrelated issue.

@AdvancedStyle could you remove ~/.local/share/io.parity.ethereum/chains/ethereum/network/nodes.json and try again?
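A minimal sketch of that cleanup, assuming the default base path mentioned above; stop the client first, or the file may be rewritten on shutdown (it is recreated on restart):

```shell
# Sketch only: clears Parity's cached node table so peer discovery starts fresh.
NODES="$HOME/.local/share/io.parity.ethereum/chains/ethereum/network/nodes.json"
[ -f "$NODES" ] && rm "$NODES"
echo "node cache cleared (if it existed)"
```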

@AdvancedStyle
Author

I've deleted nodes.json and tried again, but the number of peers is still really low. It seems to start out at about 8/25 and then over time drops to 1 or 2/25 peers, at which point it starts to have problems keeping synced (I'm guessing just because the one remaining peer is not providing the data in a timely manner).

Restarting parity syncs it back immediately, because it is initially able to connect to 7 or 8 peers.

Is there a way to force more peer connections?

Note: ports 30301-30303 are forwarded at the router, but adding/removing this port forwarding doesn't really seem to make much of a difference, as I guess it's using UPnP.

I wasn't running into these issues until after the 4370000 hard fork, and I had been running 1.7.6 for a few days before the fork without an issue.
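For reference, peer counts can be raised in the [network] section of the config file (the same min_peers/max_peers keys appear in a config later in this thread; values here are illustrative, and the equivalent CLI flags are --min-peers / --max-peers):

```toml
[network]
# Target and ceiling for peer connections (illustrative values).
min_peers = 25
max_peers = 50
```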

@AdvancedStyle
Author

I notice that the "Public node URL" is the local network IP (enode://3f4cf050...9e2e@192.168.1.149:30303).

Is this an issue? Should it be showing the public IP?

@AdvancedStyle
Author

Just throwing ideas out there....

Is it possible the node is doing something that gets it banned/blocked by peers? It is sending out quite a lot of transactions (probably an average of 3 or 4 per minute), and I have a script that re-submits the raw transaction a couple of times to make sure it sticks (I've had issues with transactions not propagating and just disappearing).

The node also has a lot of keys (about 150,000+); I don't see how that should affect syncing with peers, but it's worth mentioning.

Disk I/O and memory usage on the server seem low/normal.

@AdvancedStyle
Author

The issue seems to be somehow related to reorgs and receiving conflicting blocks.

For example, look at the console output below. The node was running along with 11 peers, then received two versions of block #4386288 with different hashes (the first being the valid one). After that, syncing seemed to freeze for a while and my peer count dropped by half:

 Imported #4386284 486e…360d (45 txs, 1.90 Mgas, 98.01 ms, 6.88 KiB)
2017-10-19 07:25:51  Imported #4386285 d2f9…6e31 (8 txs, 0.24 Mgas, 14.88 ms, 1.53 KiB)
2017-10-19 07:25:59  Imported #4386286 c831…5e0a (198 txs, 6.31 Mgas, 703.99 ms, 25.13 KiB)
2017-10-19 07:26:04    11/125 peers   6 KiB chain 66 MiB db 0 bytes queue 22 KiB sync  RPC:  0 conn,  0 req/s, 3946 µs
2017-10-19 07:26:38    11/125 peers   819 KiB chain 66 MiB db 0 bytes queue 22 KiB sync  RPC:  0 conn,  0 req/s, 3239 µs
2017-10-19 07:26:51  Imported #4386287 0350…e24d (8 txs, 0.18 Mgas, 16.93 ms, 1.49 KiB)
2017-10-19 07:27:07  Imported #4386288 8994…8ea4 (215 txs, 6.39 Mgas, 260.20 ms, 26.75 KiB)
2017-10-19 07:27:48  Imported #4386288 82fa…ee51 (220 txs, 6.60 Mgas, 106.57 ms, 27.37 KiB)
2017-10-19 07:28:02    11/125 peers   12 MiB chain 66 MiB db 0 bytes queue 22 KiB sync  RPC:  0 conn, 11 req/s,  84 µs
2017-10-19 07:29:40     6/125 peers   848 KiB chain 66 MiB db 0 bytes queue 25 KiB sync  RPC:  0 conn,  0 req/s, 3501 µs
2017-10-19 07:29:52  Imported #4386293 f247…2521 (42 txs, 1.18 Mgas, 703.32 ms, 5.92 KiB) + another 3 block(s) containing 296 tx(s)
2017-10-19 07:30:14     4/125 peers   1 MiB chain 67 MiB db 0 bytes queue 23 KiB sync  RPC:  0 conn,  0 req/s, 3501 µs

@5chdn 5chdn added the M4-core ⛓ Core client code / Rust. label Oct 19, 2017
@arkpar
Collaborator

arkpar commented Oct 19, 2017

Looks like connections just time out because all the IO handlers are busy enumerating account storage. Currently the miner reads all accounts from disk on every incoming transaction; this should be cached.

@arkpar arkpar added F2-bug 🐞 The client fails to follow expected behavior. and removed Z1-question 🙋‍♀️ Issue is a question. Closer should answer. labels Oct 19, 2017
@5chdn 5chdn added the P2-asap 🌊 No need to stop dead in your tracks, however issue should be addressed as soon as possible. label Oct 19, 2017
@5chdn 5chdn added this to the Patch milestone Oct 19, 2017
@jeremie-H

jeremie-H commented Oct 19, 2017

Hello, I'm stuck too (even after many restarts).
I'm trying to sync a full node with this config file:

Starting Parity/v1.7.7-stable-eb7c648-20171015/x86_64-linux-gnu/rustc1.20.0

[ui]
disable = true
[network]
port = 30304
min_peers = 15
max_peers = 20
nat = "extip:X.X.X.X"
[websockets]
hosts = ["localhost", ""]
port = 8548

[footprint]
db_compaction = "hdd"
cache_size = 512
pruning = "archive"
tracing = "on"
fat_db = "on"

[rpc]
port = 8544
hosts = ["all"]
interface = "192.168.1.10"

[ipc]
path = "/parity/blockchain-fatdb/jsonrpc.ipc"
[dapps]
path = "/parity/blockchain-fatdb/dapps"
[ipfs]
port = 5002

and I'm still stuck on block 2390979 :-/

2017-10-19 23:36:55  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s      0+    0 Qed  #2390979    3/15 peers    183 KiB chain  0 bytes db  0 bytes queue   19 KiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-10-19 23:36:55  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s      0+    0 Qed  #2390979    4/15 peers    269 KiB chain  0 bytes db  0 bytes queue   19 KiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-10-19 23:37:05  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s      0+    0 Qed  #2390979    5/15 peers    893 KiB chain  0 bytes db  0 bytes queue  384 KiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-10-19 23:37:14  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s      0+  370 Qed  #2391353    6/15 peers      2 MiB chain    9 KiB db    2 MiB queue  919 KiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-10-19 23:37:26  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s      0+  758 Qed  #2391741    8/15 peers      2 MiB chain    9 KiB db    5 MiB queue    2 MiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-10-19 23:37:34  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s      0+ 1582 Qed  #2392565   12/15 peers      2 MiB chain   14 KiB db   13 MiB queue    1 MiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-10-19 23:37:44  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s      0+ 2011 Qed  #2392994   15/15 peers      2 MiB chain   14 KiB db   16 MiB queue    4 MiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-10-19 23:37:54  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s   1122+ 4332 Qed  #2396440   20/20 peers      2 MiB chain   19 KiB db   38 MiB queue    7 MiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-10-19 23:37:59  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s      0+ 5457 Qed  #2396440   20/20 peers      3 MiB chain   24 KiB db   43 MiB queue   11 MiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-10-19 23:38:09  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s      0+ 5457 Qed  #2396440   20/20 peers      3 MiB chain   29 KiB db   43 MiB queue   11 MiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-10-19 23:38:19  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s      0+ 5457 Qed  #2396440   20/20 peers      3 MiB chain   39 KiB db   43 MiB queue   11 MiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-10-19 23:38:29  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s      0+ 5457 Qed  #2396440   20/20 peers      4 MiB chain   44 KiB db   43 MiB queue   11 MiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-10-19 23:38:39  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s      0+ 5457 Qed  #2396440   20/20 peers      4 MiB chain   54 KiB db   43 MiB queue   11 MiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-10-19 23:38:49  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s      0+ 5457 Qed  #2396440   20/20 peers      4 MiB chain   64 KiB db   43 MiB queue   11 MiB sync  RPC:  0 conn,  0 req/s,   0 µs
2017-10-19 23:38:59  Syncing #2390979 5965…3bfe     0 blk/s    0 tx/s   0 Mgas/s      0+ 5457 Qed  #2396440   20/20 peers      4 MiB chain   69 KiB db   43 MiB queue   11 MiB sync  RPC:  0 conn,  0 req/s,   0 µs

I've tried removing nodes.json, but same issue.

@5chdn
Contributor

5chdn commented Oct 20, 2017

@jeremie-H that's a different issue. Are you on an HDD?

@jeremie-H

@5chdn Hello! Yes, I'm on an HDD:
db_compaction = "hdd"

@5chdn
Contributor

5chdn commented Oct 20, 2017

@jeremie-H You are probably experiencing #6280 - Try to get an SSD or try a warp-sync.

@shao1555
Contributor

shao1555 commented Jan 21, 2018

I have the same issue, and I think it is not related to insufficient I/O or CPU resources.

2018-01-21 08:44:20 UTC Imported #5488657 18f3…7f8b (0 txs, 0.00 Mgas, 22.20 ms, 0.57 KiB)
2018-01-21 08:44:28 UTC Imported #5488658 7e98…d9f5 (2 txs, 0.08 Mgas, 24.94 ms, 1.16 KiB)
2018-01-21 08:44:36 UTC Imported #5488659 9c86…6ca3 (1 txs, 0.17 Mgas, 29.52 ms, 0.80 KiB)
2018-01-21 08:44:39 UTC Stage 4 block verification failed for #5488660 (68fc…2f1d)
Error: Block(TemporarilyInvalid(OutOfBounds { min: None, max: Some(1516524276), found: 1516524280 }))
2018-01-21 08:44:48 UTC    0/25 peers      6 MiB chain   69 MiB db  0 bytes queue  654 KiB sync  RPC:  0 conn, 121 req/s, 584 µs
2018-01-21 08:45:18 UTC    0/25 peers      6 MiB chain   69 MiB db  0 bytes queue  654 KiB sync  RPC:  0 conn,  0 req/s,  72 µs
2018-01-21 08:45:48 UTC    0/25 peers      6 MiB chain   69 MiB db  0 bytes queue  654 KiB sync  RPC:  0 conn, 78 req/s,  71 µs
2018-01-21 08:46:18 UTC    0/25 peers      6 MiB chain   69 MiB db  0 bytes queue  654 KiB sync  RPC:  0 conn,  0 req/s,  72 µs

my environment is :

  • occurs on v1.7.12 and v1.8.6
  • running on GKE + Parity Docker Image (also tested with Ubuntu 16.04 on GCE)
  • fully synced
  • running option: --db-path /data/chains/ --chain kovan --rpcport 8545 --rpcaddr 0.0.0.0

I tried clearing the db and ~root/.local/, but that did not resolve it.

@5chdn can you reopen this issue?

@5chdn
Contributor

5chdn commented Jan 22, 2018

@shao1555 fixed in #7613 - please downgrade to 1.8.5 or wait for 1.9.0

@pgrzesik

@5chdn I have similar problems. Which 1.7.x version should I downgrade to? (I'm using 1.7.12 right now)

@5chdn
Contributor

5chdn commented Jan 22, 2018

@pgrzesik 1.7.11, but note that 1.7.x will reach end of life very soon.

@pgrzesik

Thanks @5chdn !

@shao1555
Contributor

@5chdn thank you!


7 participants