PlanClassFunc
If you need more generality than is provided by XML-based plan class specification, you can specify plan classes in C++.
The scheduler is linked with a function
bool app_plan(SCHEDULER_REQUEST& sreq, char* plan_class, HOST_USAGE& hu);
The sreq argument describes the host. It contains the following (a sketch showing how a plan class function might examine some of these fields appears after this list):
- in the sreq.host field, a description of the host's hardware, including:
  - in p_vendor and p_model, the processor type
  - in p_features, the processor features (e.g., fpu tsc pae nx sse sse2 mmx)
  - in m_nbytes, the amount of RAM
- in sreq.coprocs, a list of the host's coprocessors.
- in core_client_version, the client's version number in MMmmRR form.
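For example, a plan class that requires the SSE2 instruction set and at least 2 GB of RAM might examine these fields as follows. This is a minimal sketch; the feature string, memory threshold, and function name are illustrative, not part of the stock scheduler:

bool app_plan_sse2_example(SCHEDULER_REQUEST& sreq, HOST_USAGE& hu) {
    // require the sse2 processor feature
    if (!strstr(sreq.host.p_features, "sse2")) {
        return false;
    }
    // require at least 2 GB of RAM
    if (sreq.host.m_nbytes < 2e9) {
        return false;
    }
    // a single-threaded app: use one CPU at the host's benchmarked speed
    hu.avg_ncpus = 1;
    hu.max_ncpus = 1;
    hu.projected_flops = sreq.host.p_fpops;
    hu.peak_flops = sreq.host.p_fpops;
    return true;
}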
When called with a particular SCHEDULER_REQUEST and plan class, the function returns true if the host's resources are sufficient for apps of that class. If so, it also fills in the HOST_USAGE structure:
struct HOST_USAGE {
    double ncudas;      // number of NVIDIA GPUs used
    double natis;       // number of ATI GPUs used
    double gpu_ram;     // max amount of GPU RAM used
    double avg_ncpus;   // avg #CPUs used by the app (may be fractional)
    double max_ncpus;   // max #CPUs used (not currently used for anything)
    double mem_usage;   // optional; if set, overrides the workunit's rsc_memory_bound
    double projected_flops;
        // an estimate of the actual FLOPS;
        // used to select among versions, so make it higher for the preferred version
    double peak_flops;
        // the peak FLOPS of the devices to be used
    char cmdline[256];
        // passed to the app as a command-line argument;
        // this can be used, e.g., to control the number of threads used
};
You can define your own set of plan classes, and link your own app_plan() function with the scheduler. The BOINC scheduler comes with a default app_plan() (in sched/sched_customize.cpp).
Here's a plan class function for a multicore app that achieves linear speedup on up to 64 processors, and no additional speedup beyond that.
bool app_plan_mt(SCHEDULER_REQUEST& sreq, HOST_USAGE& hu) {
    double ncpus = g_wreq->effective_ncpus;
        // number of usable CPUs, taking user prefs into account
    int nthreads = (int)ncpus;
    if (nthreads > 64) nthreads = 64;
    hu.avg_ncpus = nthreads;
    hu.max_ncpus = nthreads;
    sprintf(hu.cmdline, "--nthreads %d", nthreads);
    hu.projected_flops = sreq.host.p_fpops*hu.avg_ncpus*.99;
        // the .99 ensures that on uniprocessors a sequential app
        // will be used in preference to this one
    hu.peak_flops = sreq.host.p_fpops*hu.avg_ncpus;
    return true;
}
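The cmdline string is passed to the application as ordinary command-line arguments, so the app must parse it itself. Here is a minimal sketch of how a multithreaded app might read the --nthreads argument produced above (the default of 1 thread is an assumption):

#include <cstring>
#include <cstdlib>

int main(int argc, char** argv) {
    int nthreads = 1;   // default if no --nthreads argument is given
    for (int i = 1; i < argc - 1; i++) {
        if (!strcmp(argv[i], "--nthreads")) {
            nthreads = atoi(argv[i + 1]);
        }
    }
    // ... start nthreads worker threads and do the computation ...
    return 0;
}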
To define a new NVIDIA/CUDA plan class, add a new clause to app_plan_cuda() in sched/sched_customize.cpp. For example, the plan class cuda23 is defined by:
...
if (!strcmp(plan_class, "cuda23")) {
    if (!cuda_check(c, hu,
        100,        // minimum compute capability (1.0)
        200,        // max compute capability (2.0)
        2030,       // min CUDA version (2.3)
        19500,      // min display driver version (195.00)
        384*MEGA,   // min video RAM
        1.,         // # of GPUs used (may be fractional, or an integer > 1)
        .01,        // fraction of FLOPS done by the CPU
        .21         // estimated GPU efficiency (actual/peak FLOPS)
    )) {
        return false;
    }
}
To define a new ATI/CAL plan class, add a new clause to app_plan_ati(). For example:
if (!strcmp(plan_class, "ati14")) {
if (!ati_check(c, hu,
1004000, // min display driver version (10.4)
false, // require libraries named "ati", not "amd"
384*MEGA, // min video RAM
1., // # of GPUs used (may be fractional, or an integer > 1)
.01, // fraction of FLOPS done by the CPU
.21 // estimated GPU efficiency (actual/peak FLOPS)
)) {
return false;
}
}
To define a new OpenCL plan class, add a new clause to app_plan_opencl(). For example:
if (!strcmp(plan_class, "opencl_nvidia_101")) {
return opencl_check(
c, hu,
101, // OpenCL version (1.1)
256*MEGA, // min video RAM
1, // # of GPUs used
.1, // fraction of FLOPS done by the CPU
.21 // estimated GPU efficiency (actual/peak FLOPS)
);
}
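The top-level app_plan() dispatches to these per-type functions based on the plan class name. A custom app_plan() might look roughly like the sketch below; it assumes the per-type functions take (sreq, plan_class, hu) as in sched/sched_customize.cpp, and the exact dispatch logic in your version may differ:

bool app_plan(SCHEDULER_REQUEST& sreq, char* plan_class, HOST_USAGE& hu) {
    if (!strcmp(plan_class, "mt")) {
        // multicore plan class defined above
        return app_plan_mt(sreq, hu);
    }
    if (strstr(plan_class, "cuda")) {
        return app_plan_cuda(sreq, plan_class, hu);
    }
    if (strstr(plan_class, "ati")) {
        return app_plan_ati(sreq, plan_class, hu);
    }
    if (strstr(plan_class, "opencl")) {
        return app_plan_opencl(sreq, plan_class, hu);
    }
    // unknown plan class: no version of this class can run on the host
    return false;
}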