ClientSchedOctTen
Design document for changes to the client work fetch and job scheduling policies, started Oct 2010.
This supersedes the following design docs:
The current policies, described here, maintain long-term and short-term debts (LTD and STD) for each (project, processor type) pair.
Job scheduling for a given processor type is based on STD. Projects with greater STD for the type are given priority.
Work fetch is based on a weighted sum of LTDs. Work is typically fetched from the project for which this sum is greatest, and typically work is requested for all processor types.
These policies may fail to enforce resource shares. Here are two scenarios that illustrate the underlying problems:
A host has a fast GPU and a slow CPU. Project A has apps for both GPU and CPU. Project B has apps only for CPU. Equal resource shares.
In the current system each project will get 50% of the CPU. The target behavior, which matches resource shares better, is that project B gets 100% of the CPU and project A gets 100% of the GPU.
Same host. Additional project C has only CPU apps.
In this case A's CPU LTD will stay around zero, while the CPU LTDs of B and C decrease without bound until they are clamped at the cutoff. All information about the relative debt of B and C is lost.
The work fetch mechanism has two buffer sizes: min and max. The current work fetch policy requests work whenever the buffer goes below max. This can cause the client to issue frequent small work requests.
The original intention of min and max is that they are hysteresis limits: when the buffer goes below min, the client tries to fetch enough work to increase the buffer to max, and then issues no further requests until the buffer falls below min again. In this way, the interval between scheduler RPCs to a given project is at least (max-min).
The idea is to make resource share apply to overall credit, not to individual resource types. If two projects have the same resource share, they should have the same RAC. Scheduling decisions should give preference to projects whose share of RAC is less than their resource share.
There are problems with using project-granted credit as a basis for this approach:
- There may be a long and variable delay between completing a job and getting credit for it.
- Jobs may fail to get credit, e.g. because they don't validate.
Hence we will use a surrogate called estimated credit, maintained by the client. If projects grant credit fairly, and if all jobs validate, then estimated credit is roughly equal to granted credit over the long term.
Note: there is a potential advantage to using granted credit: doing so penalizes projects that grant inflated credit: the more credit a project grants, the less work a given host will do for it, assuming the host is attached to multiple projects. (The converse is potentially also true - a project would get more work done by granting less credit. This effect could be minimized by combining estimated credit with granted credit.)
BOINC server software grants credit on the basis of peak FLOPS, with a scaling factor applied to GPUs to normalize them relative to CPUs. The normalized peak FLOPS of a GPU can be estimated.
The estimated credit for a T-second segment of job execution is given by
T * ninstances(P) * peak_flops(P)
summed over the processor types used by the job.
The recent estimated credit REC(P) of a project P is maintained by the client, with an averaging half-life of, say, a month.
The scheduling priority of a project P is then
SP(P) = share(P) - REC(P)
where REC is normalized so that it sums to 1.
In scenario 1 above, SP(A) will always be negative and SP(B) will always be positive.
As the job scheduling policy picks jobs to run (e.g. on a multiprocessor) it needs to take into account the jobs already scheduled, so that it doesn't always schedule multiple jobs from the same project. To accomplish this, as each job is scheduled we increment a copy of REC(P) as if the job had run for one scheduling period.
The proposed work fetch policy:
- Work fetch for a given processor type is initiated whenever the saturated period is less than min.
- Ask the fetchable project with greatest SP(P) for "shortfall" seconds of work, where SP(P) is adjusted based on the amount of work already queued from each project.
- Whenever a scheduler RPC to project P is done (e.g. to report results) and SP(P) is greatest among fetchable projects for a given processor type, request "shortfall" seconds of that type.
Notes:
- This will tend to get large (max-min) clumps of work from a single project, so job variety will be lower than under the current policy.
- What about the "overworked" scenario? Suppose project A has a very long job that runs at high priority for weeks. When it finishes, and other projects are temporarily down, we don't want to immediately fetch more work from A.