
Event improvement: independent from frame-system, stored in multiple values, ensure not in PoV #309

Open
gui1117 opened this issue Aug 3, 2021 · 11 comments
Labels
T1-FRAME This PR/Issue is related to core FRAME, the framework.

Comments

@gui1117
Contributor

gui1117 commented Aug 3, 2021

From the last calls, here are the points we highlighted for events:

  • Isolate the depositing of events in a new crate: we want to be able to store events more easily without depending on frame-system.
    Actually I am personally not sure about this one; frame-system seems like an OK dependency to have everywhere. But I'm not very aware of the XCM stuff, so maybe it is better to isolate it in a new crate: frame-event.

  • Store events in multiple values, so that we can store more events without any single value growing too big (see the sketch after this list).
    The cost of an oversized value falls only on the client; from the runtime we only append some bytes to a value. I don't have benchmarks showing at which value size things start to become critical for RocksDB, ParityDB, and the client's usage of the database.
    Splitting is easy: we store at most N events per value, and once a value is full we use a new value to store them.

  • The crate would ensure that events are never read before they are deleted. This is important for PoV, as we don't want events to be part of the PoV.
    Currently events are removed in system initialize, and only on_runtime_upgrade is executed before that. It would be better to move the removal before the on_runtime_upgrade hook. I also think we could move system initialize before on_runtime_upgrade.
    (Maybe we can make the storage private, so pallets can't read from it, except in tests via a function with an explicit name.)
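A minimal sketch of this splitting in plain Rust, not actual FRAME storage; `EVENTS_PER_PAGE` (the N above), the `Storage` struct, and the `String` event type are placeholders for illustration:

```rust
use std::collections::HashMap;

// Placeholder for N; the real value should come from DB benchmarks.
const EVENTS_PER_PAGE: u32 = 64;

#[derive(Default)]
struct Storage {
    // One entry per "storage value": page index -> events on that page.
    pages: HashMap<u32, Vec<String>>,
    event_count: u32,
}

impl Storage {
    // Append an event, rolling over to a new page every N events so
    // that no single value grows without bound.
    fn deposit_event(&mut self, event: String) {
        let page = self.event_count / EVENTS_PER_PAGE;
        self.pages.entry(page).or_default().push(event);
        self.event_count += 1;
    }

    // Called once per block (e.g. in initialize), before any read.
    fn kill_events(&mut self) {
        self.pages.clear();
        self.event_count = 0;
    }
}

fn main() {
    let mut storage = Storage::default();
    for i in 0..200 {
        storage.deposit_event(format!("event {i}"));
    }
    // 200 events at 64 per page -> 4 pages (the last one partially full).
    assert_eq!(storage.pages.len(), 4);
    storage.kill_events();
}
```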

@gui1117
Contributor Author

gui1117 commented Aug 3, 2021

From my understanding of PoV, events are currently not part of the PoV as long as nobody reads them in on_runtime_upgrade. This should probably be improved by moving on_runtime_upgrade to after the deletion of events; the toy model at the end of this comment sketches the reasoning.
EDIT: in any case pallets should rather not read the events; maybe we can also make the storage private to discourage it.

For the splitting of events into multiple values, I think we should first look into the limits of the database.

Isolating event depositing in its own crate can be done and should be quite easy; I would like to know an example use case if possible.
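To make the PoV reasoning concrete, here is a deliberately simplified toy model, not Substrate's actual proof recorder. The assumption, following the discussion above, is that only reads of values predating the block pull them into the proof, while appends and kills are write-only, and a read after a kill sees only block-local data:

```rust
use std::collections::{HashMap, HashSet};

#[derive(Default)]
struct TrackedStorage {
    values: HashMap<&'static str, Vec<u8>>,
    // Keys whose pre-block value was killed/overwritten in this block.
    fresh: HashSet<&'static str>,
    // Keys whose pre-block value must be shipped in the PoV.
    proof: HashSet<&'static str>,
}

impl TrackedStorage {
    fn read(&mut self, key: &'static str) -> Option<&Vec<u8>> {
        if !self.fresh.contains(key) {
            // Reading a value from the previous block forces it into the proof.
            self.proof.insert(key);
        }
        self.values.get(key)
    }

    fn append(&mut self, key: &'static str, bytes: &[u8]) {
        // Write-only: the old value is not needed, so nothing is proven.
        self.values.entry(key).or_default().extend_from_slice(bytes);
    }

    fn kill(&mut self, key: &'static str) {
        // Deleting without reading proves nothing either; everything the
        // key holds afterwards is block-local.
        self.values.remove(key);
        self.fresh.insert(key);
    }
}

fn main() {
    let mut s = TrackedStorage::default();
    s.append("Events", b"leftover"); // e.g. appended in on_runtime_upgrade
    s.kill("Events");                // e.g. system::initialize
    s.append("Events", b"new");
    let _ = s.read("Events");        // read after the kill: still no proof
    assert!(s.proof.is_empty());     // the events never entered the PoV
}
```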

@bkchr
Member

bkchr commented Aug 3, 2021

* I also think we could move system initialize before on_runtime_upgrade.

Why? on_runtime_upgrade should be executed before anything else is done.

@gui1117 gui1117 changed the title Event improvment: indepent from frame-system, stored in multiples values, ensure not in PoV Event improvment: independent from frame-system, stored in multiples values, ensure not in PoV Aug 3, 2021
@gui1117
Contributor Author

gui1117 commented Aug 9, 2021

Unfortunately I couldn't solve this before my days off. Here are the steps I will take to resolve it:

  • Benchmark the cost of value size in RocksDB/the client: we need to know at which value size the performance of appending bytes starts to degrade. This will be used to estimate the number of values to store events in, and the limit on the number of events to emit while keeping sensible performance.

  • Look into other deposit_event usages: if we want to move event-related stuff into its own crate, we probably still need something generic. Currently frame_system::Config::Event is the generic event that every pallet converts to when depositing its events. If we create something like frame_event, we need to keep something generic and check whether that makes more sense than just being generic over the runtime, as frame_system is today.
    An alternative could be some special case: e.g. if we allow storing XCM events, we could enforce in FRAME that event variant 99 is reserved for XCM, and then deposit XCM events without converting to a generic event.

  • Refactor, in a crate frame-event or in frame-system, the new storage of events across multiple storage values (basically we store up to N events in each of M storage values).

  • Find a solution to ensure events don't go into the PoV; there are multiple ways:

    • Document strongly that no event should be added or read before frame_system::initialize. That means no event should be added or read in on_runtime_upgrade.
    • Move the removal of events to just before on_runtime_upgrade to enforce this. (This is probably a bad idea considering on_runtime_upgrade is usually the first thing we want to execute, but it may be fine to do just the event removal before it.)
    • Modify the deposit_event logic so that the first read/write of events before deletion triggers the deletion. But this probably needs one more storage read per deposit_event, so I don't think it is a good solution.
    • Enforce that events can't be read in the runtime (except in tests via an explicit test_read_event function; see the sketch below) and then always remove events once per block: if the runtime never reads events and always removes them once during the block, then all the events added before this removal don't require the previous value in the PoV. (In on_runtime_upgrade, appending to the previous block's events and appending to no events should be the same, as they will be cleared in system::initialize anyway. But maybe my understanding of PoV is wrong.)

    cc @shawntabrizi
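A hedged sketch of the last bullet above: the events buffer is private to a module, so pallets can only deposit, and the single read path is an explicitly named, test-gated function. A thread-local stands in for the runtime storage value; all names here are hypothetical.

```rust
mod frame_event {
    use std::cell::RefCell;

    thread_local! {
        // Stand-in for the events storage value; private to this module,
        // so nothing outside can read it.
        static EVENTS: RefCell<Vec<String>> = RefCell::new(Vec::new());
    }

    /// Pallets may only append events.
    pub fn deposit_event(event: String) {
        EVENTS.with(|e| e.borrow_mut().push(event));
    }

    /// Called once at the start of every block, before any read, so the
    /// previous block's events never need to be part of the PoV.
    pub fn kill_events() {
        EVENTS.with(|e| e.borrow_mut().clear());
    }

    /// The only read path: deliberately test-gated and loudly named.
    #[cfg(test)]
    pub fn test_read_events() -> Vec<String> {
        EVENTS.with(|e| e.borrow().clone())
    }
}

#[cfg(test)]
mod tests {
    use super::frame_event;

    #[test]
    fn deposit_then_read() {
        frame_event::kill_events();
        frame_event::deposit_event("Transfer".into());
        assert_eq!(frame_event::test_read_events(), vec!["Transfer".to_string()]);
    }
}
```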

@h4x3rotab

h4x3rotab commented Aug 26, 2021

In Phala's implementation, we have separated our MessageQueue notification events from the system events (to work around the event decoding problem). We've made a separate Vec<> storage item that we append to and kill in on_initialize, just like the system events. For various reasons, we don't want to merge it back into the system events.

My question is: can we also utilize the PoV erasure feature for our customized event buffer?

@bkchr
Member

bkchr commented Aug 26, 2021

Which PoV erasure feature do you mean?

@gui1117
Contributor Author

gui1117 commented Aug 26, 2021

For now we don't plan any specific PoV erasure feature for events. When paritytech/substrate#8931 lands, events won't be part of the PoV as long as they are not read from the runtime.

(I believe the runtime can still read events without putting them in the PoV, as long as it does so after the first kill in on_initialize, but I am not sure.)

@bkchr
Member

bkchr commented Aug 26, 2021

(I believe the runtime can still read events without putting them in the PoV, as long as it does so after the first kill in on_initialize, but I am not sure.)

Yes that is correct.

@juangirini juangirini self-assigned this May 5, 2023
@shawntabrizi
Member

Not sure if this is said elsewhere, but what we need is actually to split the current events into two:

  • Events: basically what we have now, stored in state and part of the state proof.
  • Logs: another form of events which are not stored in state, but still helpful to nodes.

If I understand correctly, Ethereum, Cosmos, and probably many other blockchains do not place the overhead of events into the state.

I believe the main reason we keep events in state is to support light clients. Specifically, if they are not in state, a light client would not be able to know whether an event occurred on chain.

However, many events are probably just "logs": things that would be super useful to track in block explorers, or for fully fledged front-ends backed by a full node, but not super relevant for light clients. In fact, I would argue that most events which are relevant to a light client could actually be replaced with logic around just checking extrinsic inclusion and success.

@bkchr
Member

bkchr commented Jun 19, 2023

  • Events: basically what we have now, stored in state and part of the state proof.

  • Logs: another form of events which are not stored in state, but still helpful to nodes.

We already have this: we have the events, and then we have logs aka digests. So it is actually the other way around from how you described it. Logs/digests can be found in the header and are easily trackable by light clients, as they don't need to look into the state to find out what kind of events have been posted.

In the long run we will also move events offchain: #245

@xlc
Contributor

xlc commented Jun 19, 2023

We already have real logs (the ones we print things to). Logs are not stored anywhere but can easily be reproduced by re-executing the block. Right now they are mostly used for debugging, but there is no reason we can't use them for tracing purposes as well.

@kianenigma
Contributor

kianenigma commented Jun 20, 2023

Based on @bkchr's comment, we actually have 3 levels of things:

  1. Digests
  2. Events
  3. Logs

Digests are information placed in the header for light clients. We could have done the same for events as well, by putting an event_root in the header; a toy illustration follows below.
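This is not how Substrate computes roots (real code would use a cryptographic hash and the trie rather than std's DefaultHasher); it is just a sketch of committing all of a block's events to a single header value that a light client could check proofs against:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn hash_one<T: Hash>(value: &T) -> u64 {
    let mut h = DefaultHasher::new();
    value.hash(&mut h);
    h.finish()
}

// Toy binary Merkle root over encoded events.
fn event_root(events: &[Vec<u8>]) -> u64 {
    if events.is_empty() {
        return 0;
    }
    let mut layer: Vec<u64> = events.iter().map(|e| hash_one(e)).collect();
    while layer.len() > 1 {
        layer = layer
            .chunks(2)
            // An odd leaf at the end is paired with itself.
            .map(|pair| hash_one(&(pair[0], *pair.last().unwrap())))
            .collect();
    }
    layer[0]
}

fn main() {
    let events = vec![b"Transfer".to_vec(), b"NewAccount".to_vec(), b"Remark".to_vec()];
    // This single value is all the header would need to commit to every event.
    println!("event_root = {:#x}", event_root(&events));
}
```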

Events as they stand now are not kept in state forever and are useful for light clients to get strong evidence that a certain transition happened.

What is being proposed here as Logs is something like Events, but without the ability to generate any proof based on them. We could simply call them Unprovable Events, as opposed to the runtime events, which are provable.

In the simplest case, it can be implemented dead-simply via a new storage item that is deleted at the end of the block, plus a new runtime API called fn get_unprovable_events() -> Vec<RuntimeEvent> (the return type could be something different, or an arbitrary key-value structure, or something else). The client can call this at the end of each block (prior to the events being deleted), and then flush them to stdout, an HTTP endpoint, or whatever other services need to listen. A sketch is below.
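A dead-simple sketch of this proposal in plain Rust; the buffer type, the trait standing in for the runtime API, and the RuntimeEvent variant are all hypothetical:

```rust
/// Stand-in for the runtime's aggregated event type.
#[derive(Clone, Debug)]
enum RuntimeEvent {
    Remarked { hash: u64 },
}

/// What the client would call at the end of each block, prior to the
/// buffer being deleted.
trait UnprovableEventsApi {
    fn get_unprovable_events(&self) -> Vec<RuntimeEvent>;
}

/// A per-block buffer: filled during execution, drained by the client,
/// and deleted at the end of the block so it never has to be provable.
#[derive(Default)]
struct UnprovableEvents(Vec<RuntimeEvent>);

impl UnprovableEvents {
    fn deposit(&mut self, event: RuntimeEvent) {
        self.0.push(event);
    }

    /// End-of-block cleanup; nothing survives into the next block's state.
    fn delete(&mut self) {
        self.0.clear();
    }
}

impl UnprovableEventsApi for UnprovableEvents {
    fn get_unprovable_events(&self) -> Vec<RuntimeEvent> {
        self.0.clone()
    }
}

fn main() {
    let mut buffer = UnprovableEvents::default();
    buffer.deposit(RuntimeEvent::Remarked { hash: 42 });
    // The client flushes these to stdout, an HTTP endpoint, etc.
    println!("{:?}", buffer.get_unprovable_events());
    buffer.delete();
}
```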


In any case, as far as I know, this line of work as described in the issue will not be continued, and we will focus more on #245.

@juangirini juangirini transferred this issue from paritytech/substrate Aug 24, 2023
@the-right-joyce the-right-joyce added T1-FRAME This PR/Issue is related to core FRAME, the framework. and removed T1-runtime labels Aug 25, 2023
@juangirini juangirini removed their assignment Aug 25, 2023
claravanstaden pushed a commit to Snowfork/polkadot-sdk that referenced this issue Dec 8, 2023
helin6 pushed a commit to boolnetwork/polkadot-sdk that referenced this issue Feb 5, 2024