
Sanity Checker for climate model

This tool performs a statistical check of the output of a new climate simulation against a set of reference simulations in order to identify systematic biases in the climatological results (due, for example, to a bug or to changes/problems in the environment setup).

This tool is based on an Excel tool that David Neubauer (ETHZ) developed for ECHAM-HAMMOZ: https://redmine.hammoz.ethz.ch/projects/hammoz/wiki/Reference_experiments_to_test_computing_platform

It analyzes annual global means over a 10-year period (any other period is possible as well) for ECHAM-HAMMOZ and ICON. In general, data from other models can be processed as well.

Currently there are 4 different tests available:

  • Welch's t-Test (welch)
  • Field Correlation Test (fldcor)
  • RMSE Test (rmse)
  • Emissions Test (emi)

For more details about the implementation of each test see test descriptions.
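As an illustration of the statistical idea behind the Welch's t-Test, the following minimal Python sketch compares the annual global means of a new experiment with those of a reference for one variable. The values, array lengths and significance level are assumptions made for this example, not the tool's actual interface.

import numpy as np
from scipy import stats

def welch_check(new_means, ref_means, alpha=0.05):
    # equal_var=False selects Welch's t-test (variances may differ)
    _, p_value = stats.ttest_ind(new_means, ref_means, equal_var=False)
    # the new experiment passes if the null hypothesis is not rejected
    return p_value >= alpha

# 10 annual global means (illustrative numbers) from the new run and a reference
new = np.array([288.31, 288.28, 288.35, 288.30, 288.27,
                288.33, 288.29, 288.32, 288.26, 288.34])
ref = np.array([288.30, 288.29, 288.33, 288.31, 288.28,
                288.32, 288.30, 288.27, 288.34, 288.31])
print('compatible with references:', welch_check(new, ref))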

Structure

This tool consists of three modules that can be run independently and a wrapper that executes them one after the other.

Each module writes intermediate files to a directory passed with the argument --p_stages; the subsequent module then looks in that directory for the files it needs. The tool is written to avoid performing time-consuming processing steps more than once.
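The caching idea can be sketched as follows; the file-name pattern and the process_raw_output() helper are hypothetical and only illustrate the check-before-recompute logic:

import os

def get_or_create_stage_file(p_stages, exp_name, process_raw_output):
    # hypothetical name of an intermediate file in the --p_stages directory
    stage_file = os.path.join(p_stages, 'timeser_{}.nc'.format(exp_name))
    if os.path.isfile(stage_file):
        # reuse the file written by a previous module or run
        return stage_file
    # otherwise perform the time-consuming processing step exactly once
    process_raw_output(stage_file)
    return stage_file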

Detailed information about the structure of the clim-sanity-checker can be found in structure.

Quick start

Prepare Environment

The clim-sanity-checker needs a CDO executable as well as several non-standard Python packages. The Python dependencies are listed in requirements.txt.

Piz Daint

module load daint-gpu
module load cray-python
module load CDO
python -m venv path_to_your_env
source path_to_your_env/bin/activate
pip install -r requirements.txt

Euler

module load python
module load climate
python -m venv path_to_your_env
source path_to_your_env/bin/activate
pip install -r requirements.txt

Run

Run testsuite:
A small testsuite, located in testsuite, checks that every test works as expected. To run it, simply type:

pytest testsuite/*
or
pytest -s testsuite/* (for more verbosity)

It is recommended to run the testsuite before any use of the clim-sanity-checker!

Configure paths with paths_init.py:
python paths_init.py -pr /project/s903/nedavid/plattform_comparison/

This will create the file paths.py in the folder lib for later use. Note that any path defined in paths.py can be overridden by command-line arguments.
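The override mechanism can be pictured roughly like this; the attribute names taken from paths.py are assumptions made for this sketch, not the file's actual content:

import argparse
from lib import paths  # generated by paths_init.py

parser = argparse.ArgumentParser()
parser.add_argument('--p_stages', default=paths.p_stages,
                    help='directory for intermediate files')
parser.add_argument('--wrkdir', default=paths.wrkdir,
                    help='working directory with sufficient disk space')
args = parser.parse_args()  # a command-line value overrides the paths.py default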

Run sanity_test.py:

Important:
Remove all restart files from the directory that contains the raw model output; they can corrupt your statistics.

The clim-sanity-checker needs disk space on the order of 1 GB. It is therefore recommended to run this tool on scratch or to pass a working directory with sufficient disk space through the argument --wrkdir.

python sanity_test.py -exp your_experiment_name --f_vars_to_extract vars_echam-hammoz.csv --raw_f_subfold Raw

This command will:

  • Launch the preprocessing for the model output (stored in the folder Raw) based on the vars_echam-hammoz.csv variable definitions.
  • Perform all tests.
  • Print the results of each test to the terminal.
  • Ask you whether this test should be added to the reference pool or not.

All test results (test_postproc_yourtest_yourexperiment.csv) and plots (Welch's t-Test only) generated by the clim-sanity-checker are stored in the stages directory. The .csv files contain, for each variable, the measure relevant for the specific test. In addition, a logfile with the same output as printed to the terminal is stored in logs.
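To inspect such a result file programmatically, a small pandas snippet is enough; the exact directory, file name and column names depend on your run and are assumptions here:

import pandas as pd

# file name follows the pattern test_postproc_yourtest_yourexperiment.csv
df = pd.read_csv('stages/test_postproc_welch_your_experiment_name.csv')
print(df.head())  # one row per variable with the measure relevant for the test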

Detailed information about all available options of sanity_test.py or other modules is provided in module arguments.

Variable Definition

This tool can analyze any variable or combination of individual fields. For each test, one needs to define in variables_to_process the variable name, the formula to derive it from the model output, and the file extension (e.g. atm_2d). An example for ECHAM-HAMMOZ is vars_echam-hammoz.csv.
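As a hedged sketch of what such a definition drives, the following Python snippet derives a hypothetical variable with a formula and reduces it to annual global means via standard CDO operators (expr, yearmean, fldmean); the formula, variable names and file names are purely illustrative:

import subprocess

formula = 'net_toa=rsdt-rsut-rlut'          # hypothetical derived variable
infile = 'my_experiment_echam6_atm_2d.nc'   # hypothetical raw output file (atm_2d stream)
outfile = 'net_toa_annual_global_means.nc'

# the CDO chain is applied right to left: derive the variable,
# then take yearly means, then global (field) means
subprocess.run(['cdo', 'fldmean', '-yearmean', '-expr,' + formula, infile, outfile],
               check=True)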

For ECHAM-HAMMOZ and ICON, such tables already exist for some of the tests:

ECHAM-HAMMOZ

The variable definitions for the Welch's t-Test are derived from the publication of Neubauer et al.: The global aerosol–climate model ECHAM6.3–HAM2.3 – Part 2: Cloud evaluation, aerosol radiative forcing, and climate sensitivity, Geosci. Model Dev., 12, 3609–3639, https://doi.org/10.5194/gmd-12-3609-2019, 2019.
For the Emissions Test, the variable definitions are taken from an Excel sheet provided by David Neubauer.

ICON

The variable definitions for the Welch's t-Test are adapted from the file below provided by Colin Tully:
vars_icon

Reference Database

A key component of this tool is the pool of references (experiments that are known to give correct results). By running the same experiment (identical namelists, input data, emission scenarios, etc.) with different compilers or on different machines, one can cross-compare and verify these installations. It is recommended to use an experiment duration of at least 10 years; for shorter periods the tests could fail due to large interannual variability even when the results are correct. For some variables even 10 years will be too short to average the interannual variability below the defined thresholds. It is therefore recommended to have several reference experiments, e.g. from different computing platforms or from different compilers or compiler settings, to which a new experiment can then be compared. If one configuration, e.g. GCC on Piz Daint, does not give results within the statistical tolerance of each test, there is very likely a bug or a problem. For more information about the references see reference_database.
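A rough sketch of the idea of such a reference pool; the platform labels, the numbers and the pooling of all references into one sample are assumptions made for illustration, and welch_check() refers to the sketch further above:

import numpy as np

reference_pool = {
    # platform/compiler label -> annual global means of one variable
    'daint_gcc':   np.array([288.30, 288.29, 288.33, 288.31, 288.28]),
    'daint_intel': np.array([288.32, 288.30, 288.27, 288.34, 288.31]),
    'euler_gcc':   np.array([288.29, 288.31, 288.30, 288.33, 288.28]),
}
pooled = np.concatenate(list(reference_pool.values()))
# a new experiment is then tested against `pooled`, e.g. with welch_check()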