Add Deployed event on MinterFilterV1 #482
Conversation
…dexing logic on V3 Engine contracts
This LGTM, though I would ask that we wait for @yoshiwarab's LGTM as well before merging.
Also, should we consider bumping from V1 to V2 here? I would think probably not, but it may make our subgraph logic easier to reason about?
@jakerockland - ah, this is a good idea; however, we don't have a public variable that broadcasts the minter filter version (doh!). It would be a great use of a minor version bump if we had that implemented.
That definitely makes sense to me @ryley-o, thank you for considering; something to follow up on down the road! Would love to hear @yoshiwarab's thoughts as well.
@ryley-o this sounds good to me. Instead of trying to track a specific block number, what do you think about adding the public version field in this minor version and doing a try/catch in the subgraph handler, assuming that if the field doesn't exist, the contract is the earlier version and should be indexed?
@yoshiwarab - That would be nice, but an issue I see with that approach is that if, for example, a partner chooses not to use our MinterFilter abstraction, our subgraph would default to indexing their unknown minter as a MinterFilter (because the try/catch would fail). That being said, we can actually fix the existing bugged MinterFilters by re-adding the (same) minter as the core contract's minter, after inducing an event on the MinterFilter (such as allowlisting a new minter). Since we know how to fix existing MinterFilters with on-chain calls, we can perform those calls to patch the MinterFilters deployed before block X, which then enables us to only index MinterFilters already in the store after block X. That keeps block-dependent behavior out of our subgraph 🤝 I should also definitely add a public getter to the MinterFilter so we can better handle future upgrades 👍 Thus, I'll update this PR to include a public version on the MinterFilter, and I'll work with @jakerockland to ensure we run the fix on the existing MinterFilters before shipping a new subgraph 🙏
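The distinction being drawn above can be sketched in plain TypeScript (the actual subgraph mappings are AssemblyScript; the names and addresses below are purely illustrative, not the real code). A try/catch on a version getter cannot tell a legacy MinterFilter apart from a partner's custom minter, since both revert; a store-membership check, seeded by the on-chain patch, can:

```typescript
// Set of MinterFilter addresses known to the subgraph store (illustrative).
const knownMinterFilters = new Set<string>();

// Simulates the on-chain patch described above: re-adding the (same) minter
// on a legacy MinterFilter induces an event whose handler records the
// MinterFilter in the store.
function patchLegacyMinterFilter(address: string): void {
  knownMinterFilters.add(address);
}

// The rule the subgraph would apply after block X: treat an address as a
// MinterFilter only if it is already in the store. A partner's custom minter
// is therefore left alone, even though it also lacks a version getter.
function shouldIndexAsMinterFilter(address: string): boolean {
  return knownMinterFilters.has(address);
}
```

For example, after `patchLegacyMinterFilter("0xlegacy")`, only `"0xlegacy"` passes the check; an unpatched partner minter address does not, which is exactly the distinction the try/catch approach cannot make.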
Description of the change
Add a `Deployed` event to IMinterFilterV0 to improve the indexing logic available for V3 Engine contracts.

We currently have an issue in our subgraph where we would like to determine whether we should index an Engine partner's MinterSuite based on whether their MinterFilter contract is in the subgraph store. This new event, along with a follow-on handler that adds the MinterFilter to the store when the `Deployed` event is observed, will enable that process.

Assuming this is implemented, a follow-on PR will be made on the subgraph: it will implement block-number-based logic to index all MinterFilters before the block at which this change is deployed (our current patched behavior), and will only index MinterFilters in a subgraph's config on blocks after this change is deployed.
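The intended handler flow can be sketched as follows, again in plain TypeScript standing in for the AssemblyScript mapping (handler and entity names such as `handleDeployed` are hypothetical, not the actual subgraph code): the `Deployed` handler seeds the store, and later minter-suite events are indexed only when the emitting MinterFilter is already stored.

```typescript
// Minimal stand-in for a subgraph store entity (illustrative shape).
type MinterFilterEntity = { id: string; coreContract: string | null };

// In-memory stand-in for the subgraph store.
const store = new Map<string, MinterFilterEntity>();

// Handler for the new Deployed event: register the MinterFilter so that
// subsequent events from it are considered part of our minter suite.
function handleDeployed(minterFilterAddress: string): void {
  store.set(minterFilterAddress, { id: minterFilterAddress, coreContract: null });
}

// Handler for a later minter-suite event: skip events emitted by contracts
// we never saw a Deployed event for. Returns whether the event was indexed.
function handleMinterSuiteEvent(minterFilterAddress: string): boolean {
  const entity = store.get(minterFilterAddress);
  if (entity === undefined) {
    return false; // unknown MinterFilter: do not index
  }
  // ...update the entity and related entities here...
  return true; // indexed
}
```

Under this sketch, an event from an address that previously emitted `Deployed` is indexed, while one from an unknown address is ignored, which is the store-membership behavior the description relies on.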
Design Alternatives Considered
We could index all Engine contract minters as MinterFilters (which is our current patched behavior), but we could experience subgraph failures if we index all Engine minter systems by default.
We could change our deployment strategy to require a specific ordering, such as requiring a minter to be allowlisted on a MinterFilter before the MinterFilter is set as the minter on a core contract (which adds the minter to the store due to the emitted event), but that system is not very robust.
This seemed like the cleanest approach with minimal effort, especially since it will also work with our future plans for a consolidated minter suite.