Panicked at 'header is last in set and contains standard change signal; must have justification' #13168
Comments
This error was triggered when the node tried to generate a warp proof (to serve a peer on the network that is doing a warp sync). A warp proof consists of finality justifications for all blocks that change the authority set, from genesis up to the current block (or whatever block we are warping to). The node persists justifications for all of these blocks since they are required to securely verify the authority set hand-offs. It seems that this node, for some reason, either didn't have the required justification in its database or it was corrupted and failed to decode. Is this something that you've seen happen more than once?
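For context on why the node panics here rather than returning an error: the warp-proof generator treats a missing justification for a set-change block as a broken invariant. Below is a minimal, self-contained Rust sketch of that invariant. It is not the actual Substrate code (the real logic lives in `client/finality-grandpa/src/warp_proof.rs`); `Backend`, `WarpFragment`, and the simplified hash/justification types are hypothetical stand-ins.

```rust
// Simplified sketch of warp-proof generation: walk the blocks that signal an
// authority-set change and attach the persisted justification for each one.
// NOT the real Substrate implementation; all types here are stand-ins.

use std::collections::HashMap;

/// Hypothetical stand-in for a block hash.
type Hash = u64;

/// One fragment of a warp proof: a set-change block plus the GRANDPA
/// justification proving it is finalized under the current authority set.
#[derive(Debug)]
struct WarpFragment {
    block_number: u64,
    block_hash: Hash,
    justification: Vec<u8>, // encoded justification bytes
}

/// Hypothetical database view: which blocks schedule an authority-set change,
/// and which justifications are persisted on disk.
struct Backend {
    /// Block (number, hash) pairs, in order, whose headers contain a standard change signal.
    set_change_blocks: Vec<(u64, Hash)>,
    /// Persisted justifications, keyed by block hash.
    justifications: HashMap<Hash, Vec<u8>>,
}

impl Backend {
    /// Build the warp proof. Every set-change block must have a stored
    /// justification, otherwise the syncing peer cannot verify the authority
    /// hand-off; that is the situation the reported panic corresponds to.
    fn generate_warp_proof(&self) -> Result<Vec<WarpFragment>, String> {
        let mut proof = Vec::new();
        for &(number, hash) in &self.set_change_blocks {
            let justification = self
                .justifications
                .get(&hash)
                .cloned()
                // The real node treats this as an unreachable invariant and
                // panics; here we surface it as an error instead.
                .ok_or_else(|| {
                    format!(
                        "header #{number} contains standard change signal; \
                         must have justification"
                    )
                })?;
            proof.push(WarpFragment {
                block_number: number,
                block_hash: hash,
                justification,
            });
        }
        Ok(proof)
    }
}

fn main() {
    // Two set-change blocks but only one stored justification, mimicking the
    // missing/corrupted-justification case described in this issue.
    let backend = Backend {
        set_change_blocks: vec![(100, 0xaa), (200, 0xbb)],
        justifications: HashMap::from([(0xaa, vec![1, 2, 3])]),
    };
    match backend.generate_warp_proof() {
        Ok(proof) => println!("warp proof with {} fragments", proof.len()),
        Err(e) => eprintln!("failed to generate warp proof: {e}"),
    }
}
```

Running the sketch prints the failure for the second set-change block instead of panicking; the real node asserts the invariant directly (the "qed." suffix in the panic message is Substrate's convention for such expect-style proofs), which is why a missing or undecodable justification takes the whole worker thread down.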
And do you maybe have the relay chain db?
Here is my moonriver.service file, if that helps. [Unit] [Service] [Install]
@perltk45 can you provide the relay chain database?
Do you want all 30gb?
Sadly there is no better way 🙈
curl --output ksmcc3.tar.zst https://eu-central-1.unitedbloc.com://ksmcc3.tar.zst It is only 8.9G after compression. |
Let me know if you want the parachain db also.
I thought this was due to the archive setting on the parachain, but I went back to the prior server running with archive mode on the parachain and found the same error! "Jan 10 18:15:35 Moon1 moonriver[453678]: Thread 'tokio-runtime-worker' panicked at 'header is last in set and contains standard change signal; must have justification; qed.', /root/.cargo/git/checkouts/substrate-189071a041b0d328/385446f/client/finality-grandpa/src/warp_proof.rs:132"
@perltk45 but you reused the same relay chain db? This issue is not related to the parachain db.
Yes, used the same relay chain db. |
@bkchr @perltk45, I'm also having these errors on a non-validator paritydb Polkadot snapshot node; I'm running with
Ahh, good that you mentioned this! The best for now would be to use
I have removed the block-pruning from the settings and will report if the errors go away. So, to be clear, is there no real use for block-pruning for now? Or is block-pruning without state-pruning also a meaningful option?
Currently, not really, besides cleaning up the db.
Is there an existing issue?
Experiencing problems? Have you tried our Stack Exchange first?
Description of bug
We had a report from a collator node (on Moonriver, Substrate v0.9.29) crashing (but being able to recover after a restart) with this error: "header is last in set and contains standard change signal; must have justification; qed."
@bkchr do you have an idea if it is worth investigating?
Steps to reproduce
No response