BUDA implementation
On the server, there is a single BOINC app; let's call it 'buda'. This app has app versions for the various platforms (Windows, Mac, Linux). Each app version contains the Docker wrapper built for that platform.
Each science app variant is a collection of files.

User-supplied files:
- A Dockerfile
- A config file, `job.toml`
- A main program or script
- Other files

Generated files:
- `variant.json`, which has info about the variant.
The set of science apps and variants is stored in a directory hierarchy of the form:

```
project/buda_apps/
    <sci_app_name>/
        cpu/
            ... files
        <plan_class>/
            ... files
        ...
    ...
```
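As a sketch, a helper that enumerates this hierarchy might look like the following. The function name is hypothetical; BOINC itself provides no such helper, and only the directory layout comes from the description above:

```python
# Sketch: enumerate BUDA science apps and their variants from the
# buda_apps directory hierarchy described above. The layout follows
# this document; the helper itself is illustrative, not part of BOINC.
import os

def list_buda_variants(project_dir):
    """Return {science_app_name: [variant_name, ...]}."""
    apps = {}
    buda_dir = os.path.join(project_dir, "buda_apps")
    for app in sorted(os.listdir(buda_dir)):
        app_dir = os.path.join(buda_dir, app)
        if not os.path.isdir(app_dir):
            continue
        # Each subdirectory is a variant: 'cpu' or a plan-class name.
        apps[app] = sorted(
            v for v in os.listdir(app_dir)
            if os.path.isdir(os.path.join(app_dir, v))
        )
    return apps
```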
This is similar to BOINC's hierarchy of apps and app versions, except:
- It's stored in a directory hierarchy, not in database tables
- Science app variants are not associated with platforms (since we're using Docker).
- It stores only the current version, not a sequence of versions (that's why we call them 'variants', not 'versions').
Conventional BOINC apps are 'polymorphic': if an app has both CPU and GPU variants, you submit jobs without specifying which one to use; the BOINC scheduler makes the decision.
It would be possible to make BUDA polymorphic, but this would be complex, requiring significant changes to the scheduler. So - for now - BUDA is not polymorphic.
When you submit jobs you have to specify which variant to use. This can be a slight nuisance: e.g. a GPU variant might be usable by only a few hosts, and if you avoid submitting to it, you forgo the computing power those hosts could provide.
In the current BOINC architecture, each BOINC app has its own validator and assimilator. If multiple science apps "share" the same BOINC app, we'll need a way to let them have different validators and assimilators.
This could be built on the script-based framework: each science app could specify the names of its validator and assimilator scripts, which would be stored in its workunits.
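A minimal sketch of that dispatch step, assuming the workunit record carries a validator script name (the field name, script location, and exit-status convention below are assumptions for illustration, not BOINC's actual schema):

```python
# Sketch: dispatch to a per-science-app validator script whose name is
# stored with the workunit, as suggested above. The 'validator_script'
# field and scripts_dir layout are hypothetical.
import subprocess

def validate_result(wu, result_file, scripts_dir="bin"):
    # Assume the workunit carries the script name, e.g. parsed out of
    # its xml_doc into a 'validator_script' field.
    script = wu["validator_script"]
    proc = subprocess.run(
        [f"{scripts_dir}/{script}", result_file],
        capture_output=True,
    )
    # Assumed convention: exit status 0 means the result is valid.
    return proc.returncode == 0
```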
BOINC provides a web interface for managing BUDA apps and submitting batches of jobs to them. Other interfaces are possible; e.g. we could make a Python-based remote API that could be used to integrate BUDA into other batch systems.
To support science apps with plan classes, BUDA will require changes to the scheduler.
Currently: given a job, the scheduler scans app versions, looking for one that the host can accept based on plan class. That won't work here.
Instead:
- Add a plan_class field to the workunit (or put it in xml_doc).
- When considering sending a WU to a host, if the WU has a plan class, skip it if the host can't handle that plan class.
The scheduler would have to scan the `buda_apps` dir structure (or we could add this info to the DB):
- Jobs are tagged with the BUDA science app name.
- The scheduler scans the variants of that science app.
- If it finds a plan class the host can accept, it builds wu.xml_doc based on the BUDA app variant info.
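The matching step above can be sketched as follows. The data structures and the plan-class check are assumptions for illustration; the real BOINC scheduler is C++ and uses its own plan-class machinery:

```python
# Sketch of the variant-matching step: for a job's science app, pick a
# variant whose plan class the host can handle. Variant names follow
# the directory layout above ('cpu' or a plan-class name); everything
# else here is hypothetical, not the actual scheduler code.

def choose_variant(science_app_variants, host_plan_classes):
    """science_app_variants: variant names for one science app.
    host_plan_classes: plan classes this host can accept.
    Returns the first usable variant, or None."""
    for variant in science_app_variants:
        # A 'cpu' variant has no plan class, so any host can take it.
        if variant == "cpu" or variant in host_plan_classes:
            return variant
    return None
```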
The above is possible but would be a lot of work.