Analysis for simulations produced with Model for Prediction Across Scales (MPAS) components and the Accelerated Climate Model for Energy (ACME), which used those components.
Analysis is stored in a directory corresponding to each core component, e.g., `ocean` for MPAS-Ocean. Shared functionality is contained within the `shared` directory.
http://mpas-analysis.readthedocs.io
This analysis repository presumes that the following python packages are available:
- numpy
- scipy
- matplotlib
- netCDF4
- xarray >= 0.10.0
- dask
- bottleneck
- basemap
- lxml
- nco >= 4.7.0
- pyproj
- pillow
You can easily install them via the conda command:

```
conda config --add channels conda-forge
conda install numpy scipy matplotlib netCDF4 xarray dask bottleneck basemap \
    lxml nco pyproj pillow
```
To list the available analysis tasks, run:

```
./run_mpas_analysis --list
```

This lists all tasks and their tags. These can be used in the `generate` command-line option or config option. See `mpas_analysis/config.default` for more details.
- Create an empty config file (say `config.myrun`) or copy one of the
  example files in the `configs` directory.
- Copy and modify any config options you want to change from
  `mpas_analysis/config.default` into your new config file.

  Requirements for custom config files:
  - At minimum you should set `baseDirectory` under `[output]` to the folder
    where output is stored. NOTE: this value should be a unique directory for
    each run being analyzed. If multiple runs are analyzed in the same
    directory, cached results from a previous analysis will not be updated
    correctly.
  - Any options you copy into the config file must include the appropriate
    section header (e.g. `[run]` or `[output]`).
  - The entire `mpas_analysis/config.default` does not need to be used. This
    file will automatically be used for any options you do not include in
    your custom config file.
  - Given the automatic sourcing of `mpas_analysis/config.default`, you
    should not alter that file directly.
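As an illustration of these requirements, a minimal custom config file might contain just the following (the path is a placeholder; substitute your own output location):

```
[output]
baseDirectory = /path/to/analysis/output/myrun
```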
- Run `./run_mpas_analysis config.myrun`. This will read the configuration
  first from `mpas_analysis/config.default` and then replace that
  configuration with any changes from `config.myrun`.
- If you want to run a subset of the analysis, you can either set the
  `generate` option under `[output]` in your config file or use the
  `--generate` flag on the command line. See the comments in
  `mpas_analysis/config.default` for more details on this option.
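The layered reading of the default and custom config files can be illustrated with Python's standard `ConfigParser` (a conceptual sketch only, not the actual implementation; the option names here are illustrative):

```python
from configparser import ConfigParser

# stand-in for mpas_analysis/config.default (illustrative options)
defaults = """
[output]
baseDirectory = /default/analysis/output
htmlSubdirectory = html
"""

# stand-in for a custom config.myrun that overrides one option
custom = """
[output]
baseDirectory = /my/run/analysis
"""

config = ConfigParser()
config.read_string(defaults)  # the defaults are read first ...
config.read_string(custom)    # ... then overridden by the custom file

print(config.get('output', 'baseDirectory'))    # /my/run/analysis
print(config.get('output', 'htmlSubdirectory'))  # html (from the defaults)
```

Options absent from the custom file keep their default values, which is why the custom file only needs to list what changes.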
- mpas-o files:
  - `mpaso.hist.am.timeSeriesStatsMonthly.*.nc`
    (Note: since OHC anomalies are computed wrt the first year of the
    simulation, if OHC diagnostics is activated, the analysis will need the
    first full year of `mpaso.hist.am.timeSeriesStatsMonthly.*.nc` files,
    no matter what `[timeSeries]/startYear` and `[timeSeries]/endYear` are.
    This is especially important to know if short-term archiving is used in
    the run to analyze: in that case, set `[input]/runSubdirectory`,
    `[input]/oceanHistorySubdirectory` and
    `[input]/seaIceHistorySubdirectory` to the appropriate run and archive
    directories and choose `[timeSeries]/startYear` and
    `[timeSeries]/endYear` to include only data that have been short-term
    archived.)
  - `mpaso.hist.am.meridionalHeatTransport.0001-03-01.nc` (or any
    `hist.am.meridionalHeatTransport` file)
  - `mpaso.rst.0002-01-01_00000.nc` (or any other mpas-o restart file)
  - `streams.ocean`
  - `mpas-o_in`
- mpas-cice files:
  - `mpascice.hist.am.timeSeriesStatsMonthly.*.nc`
  - `mpascice.rst.0002-01-01_00000.nc` (or any other mpas-cice restart file)
  - `streams.cice`
  - `mpas-cice_in`
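Because of the OHC note above, the first simulated year of monthly files must be complete before the analysis is run. A helper like the following can check that in advance (a sketch; the date pattern in the file names is an assumption based on the examples above):

```python
import glob
import os


def first_year_complete(history_dir,
                        prefix='mpaso.hist.am.timeSeriesStatsMonthly',
                        year=1):
    """Return the list of missing monthly files for the given simulated year.

    An empty list means the year is complete.  The file-name date pattern
    (YYYY-MM-01) is assumed from the example file names above.
    """
    missing = []
    for month in range(1, 13):
        pattern = os.path.join(
            history_dir,
            '{}.{:04d}-{:02d}-01.nc'.format(prefix, year, month))
        if not glob.glob(pattern):
            missing.append(pattern)
    return missing
```

Calling `first_year_complete('/path/to/run')` before enabling OHC diagnostics makes the failure mode explicit instead of discovering missing files mid-analysis.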
To purge old analysis (delete the whole output directory) before running the analysis, add the `--purge` flag:

```
./run_mpas_analysis --purge <config.file>
```

The directory to delete is the `baseDirectory` option in the `[output]` section.
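Conceptually, the purge step just removes that directory before the analysis starts; a minimal sketch of that behavior (not the actual implementation):

```python
import os
import shutil


def purge_analysis(base_directory):
    """Delete the whole analysis output directory, as --purge is described
    to do above.  Silently does nothing if the directory does not exist."""
    if os.path.isdir(base_directory):
        shutil.rmtree(base_directory)
```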
- Copy the appropriate job script file from `configs/<machine_name>` to the
  same directory as `run_mpas_analysis` (or another directory if preferred).
  The default script, `configs/job_script.default.bash`, is appropriate for a
  laptop or desktop computer with multiple cores.
- Modify the number of nodes (equal to the number of parallel tasks), the run
  name and optionally the output directory and the path to the config file
  for the run (default: `./configs/<machine_name>/config.<run_name>`).
  Note: in `job_script.default.bash`, the number of parallel tasks is set
  manually, since there are no nodes.
- Note: the number of parallel tasks can be anything between 1 and the number
  of analysis tasks to be performed. If there are more tasks than parallel
  tasks, later tasks will simply wait until earlier tasks have finished.
- Submit the job using the modified job script.
If a job script for your machine is not available, try modifying the default job script in `configs/job_script.default.bash` or one of the job scripts for another machine to fit your needs.
- Create a new task by copying `mpas_analysis/analysis_task_template.py` to
  the appropriate folder (`ocean`, `sea_ice`, etc.) and modifying it as
  described in the template. Take a look at
  `mpas_analysis/shared/analysis_task.py` for additional guidance.
  - Note: no changes need to be made to
    `mpas_analysis/shared/analysis_task.py`.
- Modify `mpas_analysis/config.default` (and possibly any machine-specific
  config files in `configs/<machine>`).
- Import the new analysis task in `mpas_analysis/<component>/__init__.py`.
- Add the new analysis task to `run_mpas_analysis` under
  `build_analysis_list`:

  ```python
  analyses.append(<component>.MyTask(config, myArg='argValue'))
  ```

  This will add a new object of the `MyTask` class to a list of analysis
  tasks created in `build_analysis_list`. Later on in `run_analysis`, it will
  first go through the list to make sure each task needs to be generated (by
  calling `check_generate`, which is defined in `AnalysisTask`), then will
  call `setup_and_check` on each task (to make sure the appropriate analysis
  member (AM) is on and files are present), and will finally call `run` on
  each task that is to be generated and is set up properly.
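The life cycle described above can be sketched with stand-in classes. The method names (`check_generate`, `setup_and_check`, `run`) come from the text; everything else here, including `MyTask`, its arguments, and the base-class constructor signature, is hypothetical, and the real base class in `mpas_analysis/shared/analysis_task.py` has much more machinery:

```python
class AnalysisTask(object):
    """Stand-in for the real base class in
    mpas_analysis/shared/analysis_task.py."""

    def __init__(self, config, taskName, componentName, tags):
        self.config = config
        self.taskName = taskName
        self.componentName = componentName
        self.tags = tags

    def check_generate(self):
        # the real version compares taskName and tags against the
        # 'generate' config option
        return True

    def setup_and_check(self):
        # the real version checks that the needed analysis member (AM)
        # is on and that the input files are present
        pass

    def run(self):
        raise NotImplementedError


class MyTask(AnalysisTask):
    """A hypothetical new task, following the structure in the template."""

    def __init__(self, config, myArg='argValue'):
        super(MyTask, self).__init__(config, taskName='myTask',
                                     componentName='ocean',
                                     tags=['timeSeries'])
        self.myArg = myArg

    def run(self):
        return 'ran {} with myArg={}'.format(self.taskName, self.myArg)


# mirror of the driver logic described above
analyses = [MyTask(config=None, myArg='argValue')]
for task in analyses:
    if task.check_generate():
        task.setup_and_check()
        print(task.run())
```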
To generate the `sphinx` documentation, run:

```
conda install sphinx sphinx_rtd_theme numpydoc recommonmark
cd docs
make html
```