please visit https://github.com/wassermanlab/Variant_catalogue_pipeline for the pipeline and importer (publish) script
please visit https://github.com/wassermanlab/variome for the portal web application
This repo is intended for Wasserman lab members working on data processing for IBVL.
The repo is organized into sub-folders corresponding to the different aspects of the data processing.
Concerning metadata tracking (tracking of information associated with each sample):
One of the tools considered for tracking metadata is OpenCGA; refer to the openCGA folder for more information.
Concerning the scripts used to generate the IBVL:
The Nextflow wrapper is used to allow traceability and reproducibility. To review or comment on the scripts, refer to the script folder.
How to run an import:
- copy the `import/.env-sample` file to `import/.env` and set values appropriately
- (optional) if you need to, run `python tables.py` to create the tables (the database should be empty before this)
- run `python orchestrate.py` to kick off the migration
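As a rough illustration, a filled-in `import/.env` might look like the following; all values here are hypothetical placeholders, and `import/.env-sample` in the repo is the authoritative template:

```shell
# Hypothetical example values -- copy import/.env-sample and adjust for your setup
PIPELINE_OUTPUT_PATH=/data/ibvl/pipeline_output   # full path to pipeline output files
JOBS_PATH=jobs                                    # relative path to the jobs folder
SCHEMA_NAME=IBVL                                  # Oracle schema name, if applicable
# COPY_MAPS_FROM_JOB=1                            # reuse maps from a previous job folder
# START_AT_MODEL=variants                         # resume at a specific model
```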
The script creates a directory called "jobs", and a directory inside that called "1" the first time it runs, "2" the second time, and so on.
Each of these job folders holds working data for the migration and two output logs (one for errors, one for progress). The working data is, for each model, a file with the latest primary key and a reverse lookup map from entity id (e.g. gene, variant, or transcript id) to primary key.
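The per-model working data (latest primary key plus the entity-id-to-primary-key map, persisted as JSON in the job folder) can be sketched roughly as follows; this is a minimal illustration under assumed file names and structure, not the importer's actual code:

```python
import json
import os
import tempfile

def save_map(job_dir, model, latest_pk, id_to_pk):
    """Persist a model's latest primary key and entity-id -> primary-key map as JSON."""
    with open(os.path.join(job_dir, f"{model}_map.json"), "w") as f:
        json.dump({"latest_pk": latest_pk, "map": id_to_pk}, f)

def load_map(job_dir, model):
    """Reload the persisted map, e.g. when resuming a failed job."""
    with open(os.path.join(job_dir, f"{model}_map.json")) as f:
        data = json.load(f)
    return data["latest_pk"], data["map"]

# Example: record three genes, then reload the map as a resumed run would.
job_dir = tempfile.mkdtemp()
save_map(job_dir, "gene", 3, {"ENSG01": 1, "ENSG02": 2, "ENSG03": 3})
latest_pk, gene_map = load_map(job_dir, "gene")
```

Keeping the map on disk is what lets a later run resolve foreign keys (and resume via `COPY_MAPS_FROM_JOB`) without re-reading rows already inserted.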
Environment variables:
- `PIPELINE_OUTPUT_PATH` - the full path to the directory containing pipeline output files
- `JOBS_PATH` - relative path (from the execution path) to the folder that will contain jobs
- `COPY_MAPS_FROM_JOB` - the script maintains maps in order to resolve primary keys; they are persisted as JSON to the job directory, named using an incrementing number. If a job fails and you want to use the maps from a previous run, set this variable to that run's job folder number
- `SCHEMA_NAME` - for an Oracle destination db, the schema name goes here
- `START_AT_MODEL` - to pick up after a previous migration run left off, enter a model name here and the script will skip to that model (models run in the order of the keys defined in the `model_import_actions` map)
- `START_AT_FILE` - for convenience, you can also skip to a particular file in the first model directory imported, using natural sorting. Be very careful if using this in production, as it will lead to false duplicates unless the primary key for new row insertions is corrected.
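"Natural sorting" here means file names are ordered by their numeric value rather than lexicographically, so `chunk2` sorts before `chunk10`. A minimal sketch of such a sort key (an illustration of the technique, not the importer's actual implementation; the file names are made up):

```python
import re

def natural_key(name):
    # Split the name into digit and non-digit runs; digit runs compare as integers.
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", name)]

files = ["chunk10.tsv", "chunk2.tsv", "chunk1.tsv"]
ordered = sorted(files, key=natural_key)
```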