Implement GM::GetVerifyingPredicate metadata instr #174
Conversation
The context of the execution should contain more info, such as the index of the current predicate, so we decouple that from VM initialization. Also, it's worthwhile to create a type that encapsulates the behavior of the runtime predicates, since it needs to store not only the program to be executed - so it can't be represented as just a memory slice - but also the original input index that created the predicate. The input index is not required for the predicate execution, since the program memory offset is a constant function of the consensus parameters, but it might be consumed by the user - as in the specs.
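To make that concrete, here is a minimal sketch of such a type. The names (`RuntimePredicate`, `program`, `input_index`) and the use of a memory range are illustrative assumptions, not the actual implementation in this PR:

```rust
use std::ops::Range;

/// Sketch only: encapsulates a predicate scheduled for verification at runtime.
pub struct RuntimePredicate {
    /// Memory range of the predicate program inside VM memory. The offset is a
    /// constant function of the consensus parameters, so it is not derived from
    /// the input index at execution time.
    program: Range<usize>,
    /// Index of the transaction input that declared this predicate. Not needed
    /// to execute the predicate, but exposed so the user can consume it
    /// (e.g. via `GM::GetVerifyingPredicate`).
    input_index: usize,
}

impl RuntimePredicate {
    pub fn new(program: Range<usize>, input_index: usize) -> Self {
        Self { program, input_index }
    }

    pub fn program(&self) -> &Range<usize> {
        &self.program
    }

    pub fn input_index(&self) -> usize {
        self.input_index
    }
}
```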
src/interpreter/initialization.rs
```diff
@@ -13,11 +13,10 @@ use std::io;

 impl<S> Interpreter<S> {
     /// Initialize the VM with a given transaction
-    pub fn init(&mut self, predicate: bool, block_height: u32, tx: CheckedTransaction) -> Result<(), InterpreterError> {
+    pub fn init(&mut self, block_height: u32, tx: CheckedTransaction) -> Result<(), InterpreterError> {
```
Block height seems to be something that belongs to the context, and not to VM initialization. Should we tackle that in this PR?
It seems like we shouldn't be setting block_height on the VM at all; especially now that we've moved the transaction validation logic out of VM initialization, I don't think it would be used. The block height opcode should be able to just fetch it from interpreter storage as needed.
The tricky part is that we'll need to make sure the interpreter storage returns the height of the currently executing block before it gets saved, rather than the height of the previously executed block.
ref: #156
Since we already have a separate issue filed for removing block_height, I think it's fine to leave it as is on this PR since it'll be easier to remove than if it got bundled in with context.
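For reference, a hypothetical sketch of what #156 suggests, with the block-height opcode reading from interpreter storage on demand; the trait and function names here are assumptions for illustration, not the existing API:

```rust
trait InterpreterStorage {
    type Error;

    /// Must return the height of the block currently being executed,
    /// not the height of the previously committed block.
    fn block_height(&self) -> Result<u32, Self::Error>;
}

/// Sketch of a block-height opcode handler that reads the height from storage
/// as needed instead of using a value captured at VM initialization.
fn op_block_height<S: InterpreterStorage>(
    storage: &S,
    registers: &mut [u64],
    dst: usize,
) -> Result<(), S::Error> {
    registers[dst] = u64::from(storage.block_height()?);
    Ok(())
}
```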
The previous implementation of the context isn't good. Also, the block height belongs to the context and should be fetched from storage a single time (on script init, since the block height isn't valid for predicates).
We shouldn't fetch the block height from storage during runtime, because the storage read is fallible:
src/interpreter/initialization.rs
```diff
     let block_height = self.storage.block_height().map_err(InterpreterError::from_io)?;

-    self.init(predicate, block_height, tx)
+    self.context = Context::Script;
```
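By contrast, here is a self-contained sketch of the "fetch once at script init" approach described above, so that runtime reads stay infallible; every type here is a stand-in defined only so the example compiles on its own:

```rust
#[derive(Debug)]
struct StorageError;

trait InterpreterStorage {
    /// Height of the block currently being executed.
    fn block_height(&self) -> Result<u32, StorageError>;
}

struct Interpreter<S> {
    storage: S,
    block_height: u32,
}

impl<S: InterpreterStorage> Interpreter<S> {
    /// The only fallible storage read happens here, at script initialization.
    fn init_script(&mut self) -> Result<(), StorageError> {
        self.block_height = self.storage.block_height()?;
        Ok(())
    }

    /// Runtime reads of the block height are then infallible.
    fn op_block_height(&self, registers: &mut [u64], dst: usize) {
        registers[dst] = u64::from(self.block_height);
    }
}
```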
[nit] Should we also have a `Create` context?
It's a bit odd indeed that we don't have a `Create` context, but that is because it will never happen at runtime. For a `Create` tx, the only executable code is the predicates, and these will be executed in a `Predicate` context.
The context getter of the interpreter should not have underlying logic, since this increases the magic-box effect of the implementation. Instead, the context should be updated whenever the frame pointer register is updated, depending on the current context.
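A rough sketch of that shape, with the context kept in sync by the frame pointer setter rather than computed in a getter; all names and variants below are assumptions, not the definitions in this PR:

```rust
enum Context {
    /// Verifying an input predicate. There is no `Create` variant because a
    /// Create tx only ever executes code as predicates.
    Predicate { input_index: usize },
    /// Executing a script at the top level (frame pointer is zero).
    Script,
    /// Executing inside a call frame (frame pointer is non-zero).
    Call,
}

struct Interpreter {
    frame_pointer: u64,
    context: Context,
}

impl Interpreter {
    /// Update the context whenever the frame pointer register changes,
    /// depending on the current context.
    fn set_frame_pointer(&mut self, fp: u64) {
        self.frame_pointer = fp;
        // Predicates never perform calls, so only Script/Call transition here.
        if matches!(self.context, Context::Script | Context::Call) {
            self.context = if fp == 0 { Context::Script } else { Context::Call };
        }
    }
}
```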
```rust
/// Initialize the VM for a predicate context
pub fn init_predicate(&mut self, tx: CheckedTransaction) -> bool {
    self.context = Context::Predicate {
        program: Default::default(),
```
Why use a default value here instead of actually setting the real predicate from an idx?
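One hypothetical way to build the predicate context from an input index rather than a default, roughly following the PR description; the types and the `program_offset` parameter are stand-ins for illustration:

```rust
use std::ops::Range;

enum Context {
    Predicate { program: Range<usize>, input_index: usize },
    Script,
}

struct Input {
    predicate: Vec<u8>,
}

struct CheckedTransaction {
    inputs: Vec<Input>,
}

/// Returns the context for verifying the predicate of input `idx`, or `None`
/// if that input carries no predicate.
fn predicate_context(tx: &CheckedTransaction, idx: usize, program_offset: usize) -> Option<Context> {
    let input = tx.inputs.get(idx)?;
    if input.predicate.is_empty() {
        return None;
    }
    // The program offset is a constant function of the consensus parameters;
    // only the program length depends on the selected input.
    let program = program_offset..program_offset + input.predicate.len();
    Some(Context::Predicate { program, input_index: idx })
}
```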