At the moment, SPARK nodes pick tasks at random from the per-round task list, and the evaluation service rewards all completed measurements.
Let's rework this part to follow what we designed in SPARK Tasking v2.
At a high level:
Within each subnet and for each job, assign the reward to the node inside that subnet that is “closest” to the job, using the XOR distance between the node’s public key and the hash of the job definition. (See below for details.)
The checker node should ask the tasker for the DRAND randomness, the committee index, and the list of tasks for the current SPARK round. Then it should pick the TN closest tasks from the list, using the XOR distance against the checker's public key as the metric. (See the Notion doc for more details.)
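The task-selection step could look roughly like this, assuming each task carries a hash of the same byte length as the checker's public key; the names and task shape are illustrative, not the actual SPARK wire format:

```javascript
// XOR distance between two equal-length byte buffers;
// smaller (lexicographically) means closer.
function xorDistance (a, b) {
  const out = Buffer.alloc(a.length)
  for (let i = 0; i < a.length; i++) out[i] = a[i] ^ b[i]
  return out
}

// Select the TN tasks whose hashes are closest to the checker's public key.
function pickClosestTasks (tasks, checkerPublicKey, tn) {
  return tasks
    .map(task => ({ task, dist: xorDistance(task.hash, checkerPublicKey) }))
    .sort((x, y) => Buffer.compare(x.dist, y.dist))
    .slice(0, tn)
    .map(({ task }) => task)
}
```

The evaluation service can run the same function over the same round's task list to reproduce each checker's expected selection.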
The evaluation service should repeat the same selection process to decide which measurements will be used for proof verification and be eligible for rewards.
ETA: 2024-07-31