Node not syncing post mandatory upgrade #5807
Comments
I have the same problem. How do I solve it?
@bmalepaty @Lin-MayH
As per the suggestion, I added the config file too. This is the new docker compose file:
The node is up but not syncing past block 60348877.
Peer /34.254.202.252:18888, Peer /44.208.138.167:18888. The node keeps syncing at block 61076290.
More logs:
19:40:26.913 INFO [peerClient-10] net Receive message from peer: /95.217.62.144:18888, type: BLOCK
19:40:27.035 INFO [sync-handle-block] DB Pending tx size: 0.
@Lin-MayH
@bmalepaty
Yes, I am using LiteFullNode data.
@Lin-MayH
Okay, originally I was using version 4.7.3.1. I just need to update to the latest version.
Can you provide a detailed description of your upgrade process, whether there was a forced shutdown of Docker during the process, and whether this issue still persists after upgrading to version 4.7.4?
I am also facing issues with the latest version while trying to sync a FullNode. I am running the jar directly, not with Docker. Command:
This is my conf file. Logs:
Also, I can see these issues in my logs:
Please let me know if anyone has any leads here.
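For reference, a bare-jar FullNode start typically looks like the following. This is only a minimal sketch assuming the standard FullNode.jar artifact and main_net_config.conf from the java-tron release, run on JDK 8; it is not the reporter's exact command (which was attached above):
# heap size is illustrative; -c points to the mainnet configuration file
java -Xmx24g -jar FullNode.jar -c main_net_config.conf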
@adityaNirvana Try
@halibobo1205, thanks. It helped!
@adityaNirvana Currently java-tron only supports JDK 8. To avoid this kind of situation again, do you think it is necessary to add runtime environment detection, so that the service exits if the detection fails?
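At the startup-script level, one sketch of that kind of detection would be to gate the launch on the reported JVM version and exit otherwise; this is only an illustration of the idea, not code from java-tron itself:
# refuse to start unless the JVM reports version 1.8 (JDK 8)
JAVA_VERSION=$(java -version 2>&1 | head -n 1)
case "$JAVA_VERSION" in
  *'"1.8'*) ;;
  *) echo "java-tron requires JDK 8, found: $JAVA_VERSION" >&2; exit 1 ;;
esac
java -jar FullNode.jar -c main_net_config.conf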
Java-tron version: 4.7.4. I'm encountering syncing issues when using certain LiteFullNode data: the blocks stop at a certain height. For data based on LiteFullNode0423 (http://3.219.199.168/backup20240423/LiteFullNode_output-directory.tgz), starting from 04/23, the data has synced successfully up to now. However, for the earlier data sets 0418 and 0420 (http://3.219.199.168/backup20240420/LiteFullNode_output-directory.tgz), syncing stopped at blocks 60930076 and 61022526 respectively, and their logs were different. The issues look similar to Lin-MayH's above, but upgrading java-tron to the latest version 4.7.4 didn't help. I'm now using data based on LiteFullNode0423, but I still can't fix syncing with the previous data. It makes me very worried that this kind of problem may happen again.
Logs: 60930077
61022527
@uiayl
Thanks for all your contributions to java-tron. This issue will be closed as there has been no update for a long time. Please feel free to re-open it if you still see the issue. Thanks.
On starting this container, the block has stopped syncing. I use the default config, main_net_config.conf. I ran both LiteFullNode and FullNode, and the result is the same: it stops at a certain block and fails validation. I installed snapshots for each node version in different directories (FullNode, LiteFullNode).
FullNode snapshot: https://db-backup-frankurt.s3-eu-central-1.amazonaws.com/FullNode-62646271-4.7.5-output-directory.tgz
podman run -d --name="java-tron" -v /mnt/vol1/data_full:/java-tron/output-directory -v /mnt/volume_lon1_02/logs:/java-tron/logs -p 8090:8090 -p 18888:18888 -p 18888:18888/udp -p 50051:50051 docker.io/tronprotocol/java-tron:GreatVoyage-v4.7.5 -jvm "{-Xmx27g -Xms27g}"
java -version
Error:
19:44:01.684 INFO [sync-handle-block] DB Block num: 62727236, re-push-size: 0, pending-size: 0, block-tx-size: 177, verify-tx-size: 177
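To rule out the JDK mismatch discussed earlier in this thread for a containerized setup, the runtime inside the running container can be inspected directly. A small sketch, assuming the container name "java-tron" from the command above:
# print the JVM version the node is actually running on
podman exec java-tron java -version
# show the most recent node output around the point where syncing stops
podman logs --tail 50 java-tron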
I tried setting up a TRON node through a Docker container:
version: "3.7"
services:
  node:
    image: tronprotocol/java-tron:GreatVoyage-v4.7.4
    restart: on-failure
    ports:
      - "7332:8090"
      - "18888:18888"
      - "50051:50051"
    volumes:
      - /TronDisk/output-directory/output-directory:/java-tron/output-directory
      - /TronDisk/logs:/logs
    container_name: tron_fullnode
On starting this container, the block has stopped syncing at 60348877.
Can you please suggest next steps?
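As a side note, one way to confirm whether the node is still advancing is to poll its HTTP API for the latest synced block and watch the height over time. A minimal sketch, assuming the 7332:8090 port mapping from the compose file above:
# block_header.raw_data.number in the response is the node's current head block
curl -X POST http://127.0.0.1:7332/wallet/getnowblock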