With #2211 in place, we need a plan for what to do with the existing on-demand core. We plan on launching on-demand by the end of the year, so on-demand will already be in production when we launch core time. We need a plan to seamlessly move any existing on-demand orders over to the new system.
I am suggesting the following:
We start directly with an initial bulk core count of 1, set in the migration of the new runtime. The core time chain starts with the same initial value (1).
Both the relay chain and the core time chain will be configured to have this core assigned to on-demand.
Both values are set directly at initialization; no messages are exchanged to achieve this, so there is no delay.
With this we can have a migration that seamlessly uses the core time on-demand core as a replacement for the previously existing explicit on-demand core. More importantly, the number of cores stays unaffected (it is not allowed to change during a session).
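The scheme above can be sketched as a small model. This is illustrative only (the types are hypothetical, not actual polkadot-sdk code): both chains seed the same state directly in their own migration/genesis, so they agree without any message exchange.

```rust
// Hypothetical model of the coretime launch state, not real runtime code.

#[derive(Clone, Copy, Debug, PartialEq)]
enum Assignment {
    /// Core serves the on-demand (pool) orders.
    Pool,
}

#[derive(Clone, Copy, Debug, PartialEq)]
struct CoreState {
    core_count: u32,
    assignment: Assignment,
}

/// The state both the relay chain and the core time chain are
/// initialised with: one bulk core, fully assigned to on-demand.
fn initial_state() -> CoreState {
    CoreState { core_count: 1, assignment: Assignment::Pool }
}

fn main() {
    let relay = initial_state();
    let coretime = initial_state();
    // Set directly in each runtime; identical by construction, no delay.
    assert_eq!(relay, coretime);
    assert_eq!(relay.core_count, 1);
    println!("relay and core time chain start in sync: {:?}", relay);
}
```

Because both sides derive the value from the same constant rather than syncing it over UMP/DMP, there is no window where the two chains disagree about the core count.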
Other considerations & Bootstrapping
The coretime chain itself must be bootstrapped. There must exist a core with the para id of the coretime chain, prior to launch.
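The bootstrap requirement can be stated as a simple invariant. The sketch below is hypothetical (the `ParaId` value is a placeholder, and `is_bootstrapped` is not a real runtime function): before launch, the relay chain's assignment set must already contain a core for the coretime chain itself.

```rust
// Hypothetical bootstrap check; ParaId value is a placeholder.

type ParaId = u32;

#[derive(Clone, Copy, Debug, PartialEq)]
enum Assignment {
    /// Core exclusively assigned to a parachain.
    Task(ParaId),
    /// Core serving the on-demand pool.
    Pool,
}

/// The coretime chain is bootstrapped only if some core is assigned
/// to its own ParaId prior to launch.
fn is_bootstrapped(cores: &[Assignment], coretime_para: ParaId) -> bool {
    cores.iter().any(|a| *a == Assignment::Task(coretime_para))
}

fn main() {
    // Placeholder ParaId for the coretime system chain.
    const CORETIME_PARA: ParaId = 1005;
    let cores = [Assignment::Task(CORETIME_PARA), Assignment::Pool];
    assert!(is_bootstrapped(&cores, CORETIME_PARA));
}
```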
Implementation
relay: Create a migration from the previous on-demand/legacy assignment provider to the new bulk/legacy assignment provider, using one bulk core with a pre-configured on-demand assignment.
relay: Pre-configure assignments for the system chains - including the coretime chain!
relay: Remove system chains from the legacy parachain pallet.
core_time: Start with existing assignments for all system chains (matching the relay chain configuration).
core time: Launch core time with those same initial settings: one core, assigned 100% to the pool.
relay: Migrate configuration:
Remove on-demand core count configuration
Add bulk core count configuration (to be set via UMP from the core time chain)
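The relay-side migration steps above can be modelled as a pure function. Again, this is only a sketch under assumed names (`Provider`, `SchedulerConfig`, and `migrate` are not the actual polkadot-sdk types): the legacy on-demand core is swapped for a bulk core pre-assigned to the pool, and the total number of cores is left unchanged, since it must not change within a session.

```rust
// Illustrative model of the migration; type names are hypothetical.

#[derive(Clone, Debug, PartialEq)]
enum Provider {
    /// Old explicit on-demand core.
    LegacyOnDemand,
    /// New bulk core pre-assigned 100% to the on-demand pool.
    BulkPool,
}

#[derive(Clone, Debug, PartialEq)]
struct SchedulerConfig {
    providers: Vec<Provider>,
}

/// Replace every legacy on-demand core with a bulk core assigned to
/// the pool. The number of cores is deliberately preserved.
fn migrate(old: SchedulerConfig) -> SchedulerConfig {
    SchedulerConfig {
        providers: old
            .providers
            .into_iter()
            .map(|p| match p {
                Provider::LegacyOnDemand => Provider::BulkPool,
                other => other,
            })
            .collect(),
    }
}

fn main() {
    let before = SchedulerConfig { providers: vec![Provider::LegacyOnDemand] };
    let count = before.providers.len();
    let after = migrate(before);
    // Core count unaffected; assignment provider replaced in place.
    assert_eq!(after.providers.len(), count);
    assert_eq!(after.providers[0], Provider::BulkPool);
}
```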
We could delay it, but in my opinion that only makes sense if we want to launch coretime without on-demand/instantaneous support in the beginning. If we want to have it, then I don't see real value in not doing this.
Given that the most crucial thing for coretime is being able to replace the legacy auction, I agree we should likely not treat on-demand as a blocker. If there are any roadblocks here, we could just not support the Pool assignment type.
Actually, even better: the above only needs to be done for Rococo (where on-demand has already launched). For Kusama we can go directly with the coretime assigner: just configure it with one core assigned to pool/on-demand, and that is equivalent to launching with the previous assigner.
eskimor changed the title from "Migration of existing on-demand to coretime" to "Migration of existing on-demand to coretime and initial settings" on Nov 14, 2023.
eskimor changed the title from "Migration of existing on-demand to coretime and initial settings" to "Migrations to coretime and initial settings & system chains" on Nov 14, 2023.