Coretime interface #5
Conversation
Co-authored-by: asynchronous rob <rphmeier@gmail.com>
We'll need upward messages for full parachains that adjust their core usage, but AFAIK there is no reason to use upward messages in the scheduling system for on-demand parachains. A payment on the relay chain can increase the balance and enqueue a message for the parachain; then, whenever the parachain makes a block, it can pull from this balance and do whatever it does with the message.
We're planning to remove as much functionality as possible from the Relay-chain, including currency. This isn't much of a problem; non-transferable DOT "vouchers" can still be hosted on the Relay-chain and payment can happen in them instead, so it's pretty much the same thing.
```
    core: CoreIndex,
    begin: BlockNumber,
    assignment: Vec<(CoreAssignment, PartsOf57600)>,
    end_hint: Option<BlockNumber>,
```
Noting here - the main purpose of the `None` end-hint is that it allows the broker chain to skip sending this message sometimes. Some users of #1 will likely split their regions up and defer their `allocate` calls until the last possible moment. When the next-up region has the same assignment as the previous, there's no need to send a new `assign_core` message.
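To make that concrete, here is a minimal broker-side sketch of the skip-when-unchanged optimisation. The type aliases and the `send_assign_core`/`maybe_assign_core` helpers are hypothetical stand-ins, not definitions from the RFC; only the comparison logic is the point.

```rust
// Hypothetical broker-side sketch; type aliases and helpers are illustrative
// stand-ins, not the definitions from the RFC.
type CoreIndex = u16;
type BlockNumber = u32;
type PartsOf57600 = u16;
type Assignment = Vec<(/* CoreAssignment */ u32, PartsOf57600)>;

fn send_assign_core(
    _core: CoreIndex,
    _begin: BlockNumber,
    _assignment: Assignment,
    _end_hint: Option<BlockNumber>,
) {
    // ...enqueue the UMP message to the Relay-chain...
}

/// Only send `assign_core` when the next-up region actually differs from the
/// one already running. Because the previous message was sent with
/// `end_hint: None`, the Relay-chain keeps applying it until it is replaced,
/// so an identical message can simply be skipped.
fn maybe_assign_core(
    core: CoreIndex,
    begin: BlockNumber,
    next: Assignment,
    current: &Assignment,
) {
    if &next == current {
        return; // same assignment as the previous region: skip the message
    }
    send_assign_core(core, begin, next, None);
}
```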
Co-authored-by: asynchronous rob <rphmeier@gmail.com>
Co-authored-by: Anton Vilhelm Ásgeirsson <antonva@users.noreply.github.com>
```
fn assign_core(
    core: CoreIndex,
    begin: BlockNumber,
    assignment: Vec<(CoreAssignment, PartsOf57600)>,
```
To be clear, the caller here doesn't have the ability to provide information about how these resources are allocated - as in, whether multiple candidates from multiple paras are included every relay chain block, or if they take turns on relay-chain blocks.
That doesn't seem to be required in #1 or #3, but if that needs to be specified later on then we'd have to adjust this interface in a future RFC.
blake2_256 hash of the 33d45a8 text is:
Instructs the Relay-chain to add the `amount` of DOT to the Instantaneous Coretime Market Credit account of `who`.

It is expected that Instantaneous Coretime Market Credit on the Relay-chain is NOT transferrable and only redeemable when used to assign cores in the Instantaneous Coretime Pool.
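As a rough illustration of the quoted semantics, the sketch below models the credit as a plain per-account balance that can only be topped up and drawn down inside the Instantaneous Coretime Pool; the names, types and storage layout here are assumptions for illustration, not the Relay-chain implementation.

```rust
use std::collections::HashMap;

type AccountId = u64; // placeholder
type Balance = u128;  // placeholder

/// Illustrative model of Instantaneous Coretime Market Credit: balances can
/// be topped up and spent in the pool, but there is deliberately no
/// `transfer` and no way to convert credit back into free DOT.
#[derive(Default)]
struct CreditLedger {
    credit: HashMap<AccountId, Balance>,
}

impl CreditLedger {
    /// Add `amount` of credit to `who` (the `credit_account` instruction).
    fn credit_account(&mut self, who: AccountId, amount: Balance) {
        *self.credit.entry(who).or_default() += amount;
    }

    /// Redeem credit when `who` buys a core from the Instantaneous Pool.
    fn spend_on_pool(&mut self, who: AccountId, price: Balance) -> Result<(), &'static str> {
        let bal = self.credit.entry(who).or_default();
        if *bal < price {
            return Err("insufficient credit");
        }
        *bal -= price;
        Ok(())
    }
}
```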
How do I get my credits back if I don't want to use them in the market anymore?
you can't
```
assert!(core < core_count);
assert!(targets.iter().map(|x| x.0).is_sorted());
```
`targets` here means `assignment`?
If these are the assignments, why do they need to be sorted?
They don't strictly need to be (depends on implementation), but it's better to start with a more constrained interface
I mean it will depend on the implementation of how `CoreAssignment` is sorted, but this will probably mean that lower task numbers will be scheduled earlier?
I really don't get the need for sorting it, especially when we want to express something like `Task(1)` and `Task(2)` both using 1/2 of the same core at the same block.
I have no opinion at all here.
Given the comments from @rphmeier below, I see why the sorting doesn't have any effect.
For `request_revenue_info`, a successful request should be possible if `when` is no less than the Relay-chain block number on arrival of the message less 100,000.

For `assign_core`, a successful request should be possible if `begin` is no less than the Relay-chain block number on arrival of the message plus 10 and `workload` contains no more than 100 items.
`workload` meaning `assignment`?
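Read literally, those two sentences amount to a pair of receive-side checks. A minimal sketch follows, assuming the constants quoted above and treating `workload` as the assignment list (the naming question raised here); the function names are made up for illustration.

```rust
type BlockNumber = u32;

const REVENUE_HISTORY: BlockNumber = 100_000;
const MIN_LEAD_TIME: BlockNumber = 10;
const MAX_ASSIGNMENT_ITEMS: usize = 100;

/// `request_revenue_info`: `when` may be at most 100,000 blocks in the past
/// at the point the message arrives.
fn revenue_request_acceptable(when: BlockNumber, now: BlockNumber) -> bool {
    when >= now.saturating_sub(REVENUE_HISTORY)
}

/// `assign_core`: `begin` must be at least 10 blocks in the future on arrival,
/// and the assignment (the `workload` in the quoted text) may have at most
/// 100 entries.
fn assign_core_acceptable(begin: BlockNumber, now: BlockNumber, assignment_len: usize) -> bool {
    begin >= now + MIN_LEAD_TIME && assignment_len <= MAX_ASSIGNMENT_ITEMS
}
```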
```
assert!(core < core_count);
assert!(targets.iter().map(|x| x.0).is_sorted());
assert_eq!(targets.iter().map(|x| x.0).unique().count(), targets.len());
```
But doesn't this prevent interleaved execution of two parachains? Because then we would have the same `CoreAssignment::Task` multiple times in this list?
Interleaving would be something like `targets: Vec<(Task(1), 1/2), (Task(2), 1/2)>`.
Where:
- `core_count` is assumed to be the sole parameter in the last received `notify_core_count` message.

Instructs the Relay-chain to ensure that the core indexed as `core` is utilised for a number of assignments in specific ratios given by `assignment` starting as soon after `begin` as possible. Core assignments take the form of a `CoreAssignment` value which can either task the core to a `ParaId` value or indicate that the core should be used in the Instantaneous Pool. Each assignment comes with a ratio value, represented as the numerator of the fraction with a denominator of 57,600.
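One possible reading of the types behind that paragraph is sketched below; the exact definitions and variant names in the RFC may differ, and the example values are only arithmetic over the 57,600 denominator.

```rust
type ParaId = u32;
type PartsOf57600 = u16; // numerator of a fraction with denominator 57,600

/// Illustrative reading: a core either works for a specific task or for the
/// Instantaneous Coretime Pool.
#[derive(Clone, PartialEq, Eq)]
enum CoreAssignment {
    Task(ParaId),
    InstantaneousPool,
}

/// Two tasks sharing one core equally: 28,800 / 57,600 = 1/2 each.
fn example_assignment() -> Vec<(CoreAssignment, PartsOf57600)> {
    vec![
        (CoreAssignment::Task(1), 28_800),
        (CoreAssignment::Task(2), 28_800),
    ]
}
```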
I don't get how `assignment` should work. How does the assignment and fraction map to scheduling the assignment to the core? I mean, for something like `[(Task(1), 0.5), (Task(2), 0.5)]`, will task 1 and task 2 be running interleaved all the time?
> How does the assignment and fraction map to scheduling the assignment to the core?
It means that the relay-chain should allow the task/assignment to use that proportion of the core's resources. It's better to leave it general like this, because this definition also allows tasks to use 1/2 of the core every relay-chain block, when we get support for having multiple candidates per core at a time.
And scheduling tasks to specific relay-chain blocks is actually really bad - this formulation does work nicely with e.g. #3, which is a probabilistic scheduler. Giving access to core resources in expectation is the best way to do it, for both user experience and system utilization.
> I mean for things like `[(Task(1), 0.5), (Task(2), 0.5)]` task 1 and task 2 will be running interleaved all the time?
Yes, or `(Task(1), 0.25), (Task(2), 0.5), (Instantaneous, 0.25)` would mean that:
- Task 1 gets to use 1/4 of the core's resources for this timeframe (1 block out of every 4 relay chain blocks)
- Task 2 gets to use 1/2 of the core's resources (2 blocks out of every 4)
- Instantaneous gets to allocate 1 block every 4 relay chain blocks.
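In `PartsOf57600` terms (a numerator over 57,600), that example works out as follows; this is just arithmetic on the fractions given above.

```rust
fn main() {
    // (Task(1), 0.25), (Task(2), 0.5), (Instantaneous, 0.25) expressed as
    // numerators over the 57,600 denominator:
    let task_1 = 57_600 / 4;        // 14,400 -> 1 block in every 4
    let task_2 = 57_600 / 2;        // 28,800 -> 2 blocks in every 4
    let instantaneous = 57_600 / 4; // 14,400 -> 1 block in every 4
    assert_eq!(task_1 + task_2 + instantaneous, 57_600); // the whole core
}
```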
> It's better to leave it general like this, because this definition also allows tasks to use 1/2 of the core every relay-chain block, when we get support for having multiple candidates per core at a time.
But how would that look? Currently the fraction expresses how many blocks a task can build in a given time range. I assume this time range will be `TIMESLICE` and both sides (relay chain/broker chain) will be aware of this? We will need something to tell the relay chain that a certain task actually needs only 1/2 of a core and not 1/2 of the time range.
It's designed to be probabilistic - over a sufficiently long period it would be scheduled 1/2 of the time, and the Relay-chain is expected to be as fair as it can be, but practical constraints preclude the ability to state anything that is "perfectly fair all the time". We don't explicitly specify a time range - that could push the relay-chain to be instructed to do things it's not practically capable of doing; the relay-chain is expected to just maximise the fairness over a minimal time range.
Practically speaking, over the next 24 months of our technology, the time range over which we'd expect the fairness to play out would be 80 relay-chain blocks. This time range could possibly reduce as scheduling becomes tighter.
> But how would that look? Currently the fraction expresses how many blocks a task can build in a given time range. I assume this time range will be `TIMESLICE` and both sides (relay chain/broker chain) will be aware of this?
While it would be valid for the relay-chain side to interpret this interface as giving each chain 1/2 of the timeslice (e.g. A gets an hour of uninterrupted time, then B gets an hour), that would be a pretty bad scheduler. Scheduler implementations (like #3) should probably aim for probabilistic interleaving while minimizing starvation.
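One way to picture "probabilistic interleaving while minimizing starvation" is a deficit-style selection loop: every relay-chain block each assignment accrues credit in proportion to its ratio, and the most-behind assignment gets the block. The sketch below is an illustrative toy under that assumption, not the scheduler proposed in #3 nor actual Relay-chain behaviour.

```rust
/// Toy deficit scheduler: every block each assignment earns credit equal to
/// its share of 57,600; the assignment with the most credit gets the block
/// and pays 57,600 for it. Over time each assignment receives its
/// proportional share without long gaps between its turns.
fn schedule(assignment: &[(&'static str, u32)], blocks: usize) -> Vec<&'static str> {
    let mut credit = vec![0i64; assignment.len()];
    let mut out = Vec::with_capacity(blocks);
    for _ in 0..blocks {
        // Everyone accrues credit proportional to their ratio.
        for (c, (_, parts)) in credit.iter_mut().zip(assignment) {
            *c += *parts as i64;
        }
        // Pick the assignment that is furthest behind its target share.
        let (best, _) = credit.iter().enumerate().max_by_key(|&(_, c)| *c).unwrap();
        credit[best] -= 57_600;
        out.push(assignment[best].0);
    }
    out
}

fn main() {
    // (Task 1, 1/4), (Task 2, 1/2), (Instantaneous, 1/4)
    let assignment = [("task-1", 14_400), ("task-2", 28_800), ("pool", 14_400)];
    // Over any window of 8 blocks this yields roughly 2 : 4 : 2, with task-2
    // never waiting more than a couple of blocks between turns.
    println!("{:?}", schedule(&assignment, 8));
}
```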
In the Agile Coretime model of the Polkadot Ubiquitous Computer, as proposed in RFC-1 and RFC-3, it is necessary for the allocating parachain (envisioned to be one or more pallets on a specialised Brokerage System Chain) to communicate the core assignments to the Relay-chain, which is responsible for ensuring those assignments are properly enacted.
This is a proposal for the interface which will exist around the Relay-chain in order to communicate this information and instructions.