old vow watch not woken up by fulfillment in new incarnation. #9126
After studying the problem, I'm now convinced the only issue is that the … So, unless we invest the time to add more functionality to the …
Yeah, maybe we could add a debug hook to liveslots that …
closes: #9302
refs: #9125, #9126, #9153, #9154, #9280

## Description

Upgrade while suspended at `await` points! Uses a membrane to log and replay everything that happened before each upgrade.

In the first incarnation, somewhere, using a ***closed*** async function argument

```js
const wrapperFunc = asyncFlow(
  zone,
  'funcName',
  async (...) => {... await ...; ...},
);
```

then elsewhere, as often as you'd like

```js
const outcomeVow = wrapperFunc(...);
```

For all these `asyncFlow` calls that happened in the first incarnation, in the first crank of all later incarnations, call

```js
asyncFlow(
  zone,
  'funcName',
  async (...) => {... await ...; ...},
);
```

with async functions that reproduce the original's logged behavior. In these later incarnations, you only need to capture the returned `wrapperFunc` if you want to create new activations. Regardless, the old activations continue.
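To make the pattern concrete, here is a minimal sketch. The `deposit` flow name, `alicePurse`, and `somePaymentVow` are hypothetical, `@agoric/async-flow` is assumed to be the package this PR introduces, and a durable `zone` is taken as given:

```js
import { asyncFlow } from '@agoric/async-flow';

// A closed async function: everything it uses arrives as an argument,
// so the membrane can log, and later replay, every interaction.
const deposit = asyncFlow(
  zone,
  'deposit',
  async (purse, paymentVow) => {
    // Inside the guest, a host vow arrives as an ordinary promise.
    const payment = await paymentVow;
    return purse.deposit(payment);
  },
);

// Each call creates a durable activation whose outcome is a vow.
const outcomeVow = deposit(alicePurse, somePaymentVow);
```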
#### Future Growth

I designed this PR so it could grow in the following ways:

- TODO: The membrane should use the `HandledPromise` API to make proper remote presences and handled promises, so that the guest function can use `E` on objects or promises it receives from the host, as expected. I commented out the additional ops needed for these: `doSend` and `checkSend`.
- TODO: Currently, I assume that the host side only presents vows, not promises. However, imported remote promises can be stored durably and be durably watched, so the membrane should be extended to handle those.
- TODO: We currently impose the restriction that the guest cannot export guest-created remotables or promises to the host. (It can pass back to the host remotables or promises it received from the host.) I commented out the additional ops needed for these: `doCall`, `checkReturn`, and `checkThrow`. I wrote the `bijection` and `equate` subsystems so that old durable host wrappers can be hooked back up on replay to the corresponding new guest remotables and promises.

### Security Considerations

Nothing enforces that the argument async function is closed, i.e., that it does not capture (lexically "close over") any direct access to mutable state or any ability to cause effects. If it does, it still cannot harm anything but itself. But it -- or its replayings -- may be more confusable, and so more vulnerable to confusion attacks.

Since this entire framework is itself written as a helper (well, a huge helper) with no special privilege, it cannot be used to do anything that could not otherwise have been done. It is not a source of new authority. See the caveats under Upgrade Considerations about isolation of effects following a replay failure.

### Scaling Considerations

We assume that the total number of async functions, i.e., calls to `asyncFlow`, is low cardinality. This is essential to the design, in exactly the same sense as our assumption that exoClasses are low cardinality. The RAM cost is proportional to the number of these.

The current implementation further assumes that the total number of activations of these replayable async functions is low cardinality. This will likely become a scaling problem at some point, but is unlikely to be painful for the initial expected use cases.

The current implementation imposes two simplifying restrictions that allow us to relieve some of this memory pressure. Currently, the async function argument cannot make and export new remotables or promises to the host. Thus, once its outcomeVow is settled, its job is done; there is very little more it can do that is observable. So once this outcome is settled, the activation is shut down and most of the memory it held onto is dropped.

The activations that are not shut down must replay from the beginning in each incarnation. If a given activation has a long history of past activity, this can become expensive.

How do we verify in CI that, both while an asyncFlow is in use and after it has completed, resource usage in RAM and on disk meets our expectations? The PR assumes "low cardinality" of asyncFlows. What scale is "low cardinality" -- 10^3? 10^5? What is the risk if the cardinality is too high?

### Documentation Considerations

For the normal developer documentation, `asyncFlow` should make things simpler and more like "just JavaScript". The membrane translates between host vows and guest promises, so the async function can simply `await` promises without needing the `when` from `@agoric/vow`.
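The contrast, as a sketch (assuming `prepareVowTools` from `@agoric/vow` and the `asyncFlow` signature shown above; `someVow`, `somePromise`, and the flow name are hypothetical):

```js
import { prepareVowTools } from '@agoric/vow';

const { when } = prepareVowTools(zone);

// Host style, outside asyncFlow: unwrap the vow explicitly with `when`.
const hostStyle = async someVow => {
  const value = await when(someVow);
  return value;
};

// Guest style, inside asyncFlow: the membrane hands the guest an ordinary
// promise, so a plain `await` suffices, with no vow-specific imports.
const guestStyle = asyncFlow(zone, 'guestStyle', async somePromise => {
  const value = await somePromise;
  return value;
});
```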
### Testing Considerations

This PR is structured as a tower of building blocks, each of which I unit tested as I went, in bottom-up order, so I could build with confidence. Currently, each of these building blocks is also very paranoid about internal consistency checking, so I'd get early indications if I made a mistake. Some of this internal consistency checking can probably be reduced over time, as we gain more static confidence.

This PR currently uses the fake upgrade testing framework from the `@agoric/zone` package. This caused bug #9126. Instead, we need to redo all these tests with a real upgrade testing framework, like the one in bootstrapTests. See https://github.com/Agoric/agoric-sdk/blob/master/packages/boot/test/bootstrapTests/test-zcf-upgrade.ts

### Upgrade Considerations

The point. In a reviving incarnation, if the async function argument of

```js
asyncFlow(
  zone,
  'funcName',
  async (...) => {... await ...; ...},
);
```

fails to recapitulate the logs of each of its activations, those activations will not have done any damage. They will simply be stuck, with a diagnostic available via

```js
adminAsyncFlow.getFailures(),
```

as sketched after the caveats below. To unstick these, so those stuck activations can continue to make progress, upgrade again using an async function argument that does reproduce the logged behavior of each of its activations.

#### Caveat: Imperfect isolation of effects following a replay failure

Once a replay failure is detected, we attempt to isolate the replaying activation from its outside world, and also to shut down its further execution as soon as possible. But it may be in the midst of activity with a non-empty stack, and following a replay failure we have no way to preemptively terminate it without any further execution. That further execution may therefore be arbitrarily confused. We simply try to isolate it as much as possible, immediately revoking all access it had through the membrane to any authority to cause observable effects. However:

- We do not consider `console` logging activity to be an observable effect. Such logging might include diagnostics emitted by this framework in response to the "isolated" confused behavior.
- Because we do not consider `console` logging to be an observable effect, we also allow it as an exception to our closed-function rule. Messages the function sends directly to the console are not logged, and can differ without causing replay failure. During its post-replay-failure confused execution, it can still log directly to the console.
- It is not resource limited, so its post-replay confused execution can accidentally engage in resource exhaustion, including infinite loops. However, the vat as a whole is resource limited. An infinite loop will eventually crash the vat, which can then be recovered with yet another upgrade.
- Because of metering, an activation that executed successfully in a previous incarnation might not replay correctly, even if it causes no explicit side effects. That's because metering is a hidden side effect of any execution.
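A hypothetical post-upgrade check along those lines. The shape of the `getFailures()` result is a guess (something iterable as `[activation, reason]` pairs), not an API documented here:

```js
// After upgrading, ask the admin facet which activations failed to replay.
const failures = adminAsyncFlow.getFailures();

for (const [flow, reason] of failures.entries()) {
  console.error('stuck activation:', flow, 'replay failure:', reason);
}

// If anything is stuck, upgrade again with an async function argument that
// does recapitulate the logged behavior; the stuck activations then resume.
```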
Do we actually need a hook into liveslots for this, or could this be done from the outside somehow? Can't we parse the baggage to get the watched promises, and simply inject the rejections like the kernel would do? The part I'm unsure about is how we would know which promises were decided by the vat (we don't have a kernel or clist in these tests, from what I understand).
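A rough sketch of that "inject the rejections" idea. Here `watchedPromises`, `decidedByVat`, and `rejectWatchedPromise` are hypothetical stand-ins for whatever the tests could reach, and the import path for `makeUpgradeDisconnection` is an assumption:

```js
import { makeUpgradeDisconnection } from '@agoric/internal/src/upgrade-api.js';

// At upgrade, the kernel rejects every promise the vat was watching but did
// not itself decide, using a standard disconnection object as the reason.
const disconnection = makeUpgradeDisconnection('vat upgraded', incarnationNumber);

for (const vpid of watchedPromises.keys()) {
  // The open question above: knowing which promises the vat itself decided.
  if (!decidedByVat.has(vpid)) {
    rejectWatchedPromise(vpid, disconnection);
  }
}
```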
Yeah, that's a trick. The current tests don't have any clear boundary between "inside" and "outside" the vat, since both sides share the same compartment without designating which side each of them is on. If you have anything to sketch out about how I might solve this, the right place to do it is in

```js
const { fakeStore } = incarnation;
console.log('@@@', fakeStore.entries());
```

I don't know enough about liveslots to understand how to get ahold of the …
Oh, I just realized these tests don't create a full liveslots talking to the kernel over simulated syscalls, but instead instantiate only the virtual-data part of liveslots. As such, I'm not even sure there is any support for "message sends" and "external promise resolution". I'm really confused now.
If we looked from the outside, that'd require parsing the collection as encoded in the vatStore. But since we're in a more custom environment, maybe we could get hold of the collection more directly. Or we might just be able to hook into …
Bug isolated and demonstrated at #9125.
Isolates the bug that #9097 is stuck on. As isolated, the problem does reproduce.
I'm following the general testing pattern used in #9097, especially in test-async-flow.js, which is where I'm having this problem. There, see
agoric-sdk/packages/zone/test/async-flow/test-async-flow.js, lines 181 to 182 at commit 9a00502
I shouldn't need the `asyncFlow.wake()` call. But if I remove it, I get the same symptom isolated here: if I `watch` with an unresolved durable vow and a durable watcher in the first incarnation, and then resolve that durable vow in the second incarnation, the durable watcher that was watching that vow should now wake up. It doesn't. @michaelfig, is this a bug in watching durable vows across incarnations?
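A minimal sketch of the two-incarnation reproduction described above, assuming the vow tools API (`prepareVowTools`, `makeVowKit`, `watch`) from `@agoric/vow` and one durable zone reloaded per incarnation; the labels and values are hypothetical:

```js
import { prepareVowTools } from '@agoric/vow';

// Run once per (simulated) incarnation against the same durable zone.
const setup = zone => {
  const { watch, makeVowKit } = prepareVowTools(zone.subZone('vows'));
  // Durable watcher; its onFulfilled should fire even across an upgrade.
  const watcher = zone.exo('watcher', undefined, {
    onFulfilled(value) {
      console.log('woke up with', value);
    },
  });
  // makeOnce: create the vow kit in the first incarnation, reload it later.
  const kit = zone.makeOnce('kit', () => makeVowKit());
  return { watch, watcher, kit };
};

// First incarnation: watch the still-unresolved durable vow.
//   const { watch, watcher, kit } = setup(firstZone);
//   watch(kit.vow, watcher);

// Second incarnation, after the simulated upgrade: resolve the same vow.
//   const { kit: kit2 } = setup(secondZone);
//   kit2.resolver.resolve('hello');
// Expected: watcher.onFulfilled('hello') fires. Observed: it doesn't.
```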