
Commit a8e8a89: merge master

aulemahal committed Dec 15, 2023
2 parents be25751 + 85a2f74
Showing 16 changed files with 322 additions and 49 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/main.yml
@@ -20,7 +20,7 @@ on:
- submitted

env:
-  XCLIM_TESTDATA_BRANCH: v2023.9.12
+  XCLIM_TESTDATA_BRANCH: v2023.12.14

concurrency:
# For a given workflow, if we push to the same branch, cancel all previous builds on that branch except on master.
8 changes: 5 additions & 3 deletions CHANGES.rst
@@ -2,12 +2,14 @@
Changelog
=========

-v0.48.0 (unreleased)
---------------------
-Contributors to this version: Pascal Bourgault (:user:`aulemahal`).
+v0.48 (unreleased)
+------------------
+Contributors to this version: Juliette Lavoie (:user:`juliettelavoie`), Pascal Bourgault (:user:`aulemahal`), Trevor James Smith (:user:`Zeitsperre`), David Huard (:user:`huard`), Éric Dupuis (:user:`coxipi`).

New features and enhancements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+* Added uncertainty partitioning method `lafferty_sriver` from Lafferty and Sriver (2023), which can partition uncertainty related to the downscaling method. (:issue:`1497`, :pull:`1529`).
+* Validate YAML indicator descriptions before trying to build the module. (:issue:`1523`, :pull:`1560`).
* New ``xclim.core.calendar.stack_periods`` and ``unstack_periods`` for performing ``rolling(time=...).construct(..., stride=...)`` but with non-uniform temporal periods like years or months. They replace ``xclim.sdba.processing.construct_moving_yearly_window`` and ``unpack_moving_yearly_window``, which are deprecated and will be removed in a future release.

v0.47.0 (2023-12-01)
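As context for the ``stack_periods`` entry above: a minimal sketch of the new calendar helpers. This is illustrative only; the toy series is invented, and the keyword names (``window``, ``stride``) are assumptions based on the entry's wording rather than on this diff.

```python
import numpy as np
import xarray as xr
from xclim.core.calendar import stack_periods, unstack_periods

# A toy daily series covering 1950-2020 (invented for illustration).
time = xr.cftime_range("1950-01-01", periods=365 * 71, freq="D")
da = xr.DataArray(np.arange(time.size), dims="time", coords={"time": time})

# Stack overlapping 30-year windows that advance by 10 years, similar to
# rolling(time=...).construct(..., stride=...) but with true calendar years.
stacked = stack_periods(da, window=30, stride=10)

# Invert the stacking; see the API docs for how overlapping windows are handled.
restored = unstack_periods(stacked)
```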
3 changes: 3 additions & 0 deletions docs/api.rst
@@ -65,6 +65,9 @@ Ensembles Module
.. autofunction:: xclim.ensembles.hawkins_sutton
   :noindex:

+.. autofunction:: xclim.ensembles.lafferty_sriver
+   :noindex:
+
Units Handling Submodule
========================

23 changes: 8 additions & 15 deletions docs/notebooks/extendxclim.ipynb
@@ -397,26 +397,15 @@
"\n",
"\n",
"#### Validation of the YAML file\n",
"Using [yamale](https://github.com/23andMe/Yamale), it is possible to check if the YAML file is valid. `xclim` ships with a schema (in `xclim/data/schema.yml`) file. The file can be located with:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from importlib.resources import path\n",
"\n",
"with path(\"xclim.data\", \"schema.yml\") as f:\n",
" print(f)"
"Using [yamale](https://github.com/23andMe/Yamale), it is possible to check if the YAML file is valid. `xclim` ships with a schema (in `xclim/data/schema.yml`) file. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And the validation can be executed either in a python session:"
"The validation can be executed in a python session:"
]
},
{
@@ -425,6 +414,8 @@
"metadata": {},
"outputs": [],
"source": [
"from importlib.resources import path\n",
"\n",
"import yamale\n",
"\n",
"with path(\"xclim.data\", \"schema.yml\") as f:\n",
@@ -437,13 +428,15 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"No errors means it passed. The validation can also be run through the command line with:\n",
"Or the validation can alternatively be run from the command line with:\n",
"\n",
"```bash\n",
"yamale -s path/to/schema.yml path/to/module.yml\n",
"```\n",
"\n",
"#### Loading the module and computating of the indices."
"Note that xclim builds indicators from a yaml file, as shown in the next example, it validates it first. \n",
"\n",
"#### Loading the module and computing indicators."
]
},
{
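Because the diff shows the notebook's validation cells only in fragments, here is a self-contained sketch of the flow they describe. The module file name `example.yml` is a placeholder; the schema location comes from the notebook text above.

```python
from importlib.resources import path

import yamale

# Locate the schema shipped with xclim, then validate a custom indicator module.
with path("xclim.data", "schema.yml") as schema_path:
    schema = yamale.make_schema(schema_path)
    data = yamale.make_data("example.yml")  # placeholder: your module file
    yamale.validate(schema, data)  # raises a YamaleError if the file is invalid
```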
14 changes: 11 additions & 3 deletions docs/notebooks/partitioning.ipynb
@@ -79,7 +79,11 @@
"source": [
"## Create an ensemble \n",
"\n",
"Here we combine the different models and scenarios into a single DataArray with dimensions `model` and `scenario`. Note that the names of those dimensions are important for the uncertainty partitioning algorithm to work. "
"Here we combine the different models and scenarios into a single DataArray with dimensions `model` and `scenario`. Note that the names of those dimensions are important for the uncertainty partitioning algorithm to work. \n",
"\n",
"<div class=\"alert alert-info\">\n",
"Note that the [xscen library](https://xscen.readthedocs.io/en/latest/index.html) provides a helper function `xscen.ensembles.get_partition_input` to build partition ensembles.\n",
"</div>"
]
},
{
@@ -137,7 +141,11 @@
"id": "41af418d-9e92-433c-800c-6ba28ff7684c",
"metadata": {},
"source": [
"From there, it's relatively straightforward to compute the relative strength of uncertainties, and create graphics similar to those found in scientific papers. "
"From there, it's relatively straightforward to compute the relative strength of uncertainties, and create graphics similar to those found in scientific papers. \n",
"\n",
"<div class=\"alert alert-info\">\n",
"Note that the [figanos library](https://figanos.readthedocs.io/en/latest/) provides a function `fg.partition` to plot the graph below.\n",
"</div>"
]
},
{
@@ -238,7 +246,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.8"
"version": "3.9.13"
}
},
"nbformat": 4,
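To make the notebook's partitioning workflow concrete, here is a minimal sketch on synthetic data. The sizes, trend, and values are invented; what matters, as the notebook notes, is that ``lafferty_sriver`` receives a DataArray with the dimensions ``scenario``, ``model``, ``downscaling`` and ``time``.

```python
import numpy as np
import xarray as xr
from xclim.ensembles import fractional_uncertainty, lafferty_sriver

# Synthetic annual means: 3 scenarios x 5 models x 4 downscaling methods x 86 years.
time = xr.date_range("2015-01-01", periods=86, freq="YS")
rng = np.random.default_rng(42)
tas = xr.DataArray(
    rng.standard_normal((3, 5, 4, 86)) + np.linspace(0, 3, 86),  # noise + warming trend
    dims=("scenario", "model", "downscaling", "time"),
    coords={"time": time},
    name="tas",
)

mean, var = lafferty_sriver(tas)  # forced response and uncertainty components
frac = fractional_uncertainty(var)  # each component as a percentage of the total
print(frac.sel(uncertainty="scenario").isel(time=-1).item())
```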
17 changes: 17 additions & 0 deletions docs/references.bib
@@ -2086,3 +2086,20 @@ @inbook{
year={2023},
pages={1927–2058}
}

+@article{Lafferty2023,
+abstract = {Efforts to diagnose the risks of a changing climate often rely on downscaled and bias-corrected climate information, making it important to understand the uncertainties and potential biases of this approach. Here, we perform a variance decomposition to partition uncertainty in global climate projections and quantify the relative importance of downscaling and bias-correction. We analyze simple climate metrics such as annual temperature and precipitation averages, as well as several indices of climate extremes. We find that downscaling and bias-correction often contribute substantial uncertainty to local decision-relevant climate outcomes, though our results are strongly heterogeneous across space, time, and climate metrics. Our results can provide guidance to impact modelers and decision-makers regarding the uncertainties associated with downscaling and bias-correction when performing local-scale analyses, as neglecting to account for these uncertainties may risk overconfidence relative to the full range of possible climate futures.},
+author = {David C. Lafferty and Ryan L. Sriver},
+doi = {10.1038/s41612-023-00486-0},
+issn = {2397-3722},
+issue = {1},
+journal = {npj Climate and Atmospheric Science},
+keywords = {Atmospheric science,Climate,Climate and Earth system modelling,Projection and prediction,change impacts},
+month = {9},
+pages = {1-13},
+publisher = {Nature Publishing Group},
+title = {Downscaling and bias-correction contribute considerable uncertainty to local climate projections in CMIP6},
+volume = {6},
+url = {https://www.nature.com/articles/s41612-023-00486-0},
+year = {2023},
+}
2 changes: 1 addition & 1 deletion setup.cfg
@@ -1,5 +1,5 @@
[bumpversion]
-current_version = 0.47.0
+current_version = 0.47.2-beta
commit = True
tag = False
parse = (?P<major>\d+)\.(?P<minor>\d+).(?P<patch>\d+)(\-(?P<release>[a-z]+))?
25 changes: 25 additions & 0 deletions tests/conftest.py
@@ -24,6 +24,7 @@
from xclim.testing import helpers
from xclim.testing.helpers import test_timeseries
from xclim.testing.utils import _default_cache_dir # noqa
+from xclim.testing.utils import get_file
from xclim.testing.utils import open_dataset as _open_dataset

if not __xclim_version__.endswith("-beta") and helpers.TESTDATA_BRANCH == "main":
@@ -429,6 +430,30 @@ def ensemble_dataset_objects() -> dict:
    return edo


+@pytest.fixture(scope="session")
+def lafferty_sriver_ds() -> xr.Dataset:
+    """Get data from Lafferty & Sriver unit test.
+
+    Notes
+    -----
+    https://github.com/david0811/lafferty-sriver_2023_npjCliAtm/tree/main/unit_test
+    """
+    fn = get_file(
+        "uncertainty_partitioning/seattle_avg_tas.csv",
+        cache_dir=_default_cache_dir,
+        branch=helpers.TESTDATA_BRANCH,
+    )
+
+    df = pd.read_csv(fn, parse_dates=["time"]).rename(
+        columns={"ssp": "scenario", "ensemble": "downscaling"}
+    )
+
+    # Make an xarray dataset
+    return xr.Dataset.from_dataframe(
+        df.set_index(["scenario", "model", "downscaling", "time"])
+    )


@pytest.fixture(scope="session", autouse=True)
def gather_session_data(threadsafe_data_dir, worker_id, xdoctest_namespace):
"""Gather testing data on pytest run.
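The fixture above uses a handy pandas-to-xarray pattern: setting a MultiIndex before ``xr.Dataset.from_dataframe`` turns each index level into its own dimension. A small sketch with invented values:

```python
import numpy as np
import pandas as pd
import xarray as xr

# A tiny stand-in for seattle_avg_tas.csv (values invented for illustration).
df = pd.DataFrame(
    {
        "scenario": ["ssp245"] * 2 + ["ssp585"] * 2,
        "model": ["MPI"] * 4,
        "downscaling": ["BCSD"] * 4,
        "time": pd.to_datetime(["2020-01-01", "2021-01-01"] * 2),
        "tas": np.arange(4.0),
    }
)

# Each MultiIndex level becomes a dimension of the resulting dataset.
ds = xr.Dataset.from_dataframe(
    df.set_index(["scenario", "model", "downscaling", "time"])
)
print(ds.tas.dims)  # ('scenario', 'model', 'downscaling', 'time')
```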
6 changes: 0 additions & 6 deletions tests/test_modules.py
@@ -61,12 +61,6 @@ def test_custom_indices(open_dataset):
    # Use the example data used in the Extending Xclim notebook for testing.
    example_path = Path(__file__).parent.parent / "docs" / "notebooks" / "example"

-    schema = yamale.make_schema(
-        Path(__file__).parent.parent / "xclim" / "data" / "schema.yml"
-    )
-    data = yamale.make_data(example_path / "example.yml")
-    yamale.validate(schema, data)
-
    pr = open_dataset("ERA5/daily_surface_cancities_1990-1993.nc").pr

    # This tests load_module with a python file that is _not_ on the PATH
91 changes: 90 additions & 1 deletion tests/test_partitioning.py
@@ -3,7 +3,7 @@
import numpy as np
import xarray as xr

-from xclim.ensembles import hawkins_sutton
+from xclim.ensembles import fractional_uncertainty, hawkins_sutton, lafferty_sriver
from xclim.ensembles._filters import _concat_hist, _model_in_all_scens, _single_member


@@ -67,3 +67,92 @@ def test_hawkins_sutton_synthetic(random):
        su.sel(time=slice("2020", None)).mean()
        > su.sel(time=slice("2000", "2010")).mean()
    )


+def test_lafferty_sriver_synthetic(random):
+    """Test logic of Lafferty & Sriver's implementation using synthetic data."""
+    # Dimensions: time, scenario, model, downscaling.
+    # Here the scenarios don't change over time, so there should be no model variability
+    # (since it's relative to the reference period).
+    sm = np.arange(10, 41, 10)  # Scenario mean (4)
+    mm = np.arange(-6, 7, 1)  # Model mean (13)
+    dm = np.arange(-2, 3, 1)  # Downscaling mean (5)
+    mean = (
+        dm[np.newaxis, np.newaxis, :]
+        + mm[np.newaxis, :, np.newaxis]
+        + sm[:, np.newaxis, np.newaxis]
+    )
+
+    # Natural variability
+    r = random.standard_normal((4, 13, 5, 60))
+
+    x = r + mean[:, :, :, np.newaxis]
+    time = xr.date_range("1970-01-01", periods=60, freq="Y")
+    da = xr.DataArray(
+        x, dims=("scenario", "model", "downscaling", "time"), coords={"time": time}
+    )
+    m, v = lafferty_sriver(da)
+    # Mean uncertainty over time
+    vm = v.mean(dim="time")
+
+    # Check that the overall mean matches the average of the scenario means (25)
+    np.testing.assert_array_almost_equal(m.mean(dim="time"), 25, decimal=1)
+
+    # Check that model uncertainty > variability
+    assert vm.sel(uncertainty="model") > vm.sel(uncertainty="variability")
+
+    # Smoke test with polynomial of order 2
+    fit = da.polyfit(dim="time", deg=2, skipna=True)
+    sm = xr.polyval(coord=da.time, coeffs=fit.polyfit_coefficients).where(da.notnull())
+    lafferty_sriver(da, sm=sm)
+
+
+def test_lafferty_sriver(lafferty_sriver_ds):
+    g, u = lafferty_sriver(lafferty_sriver_ds.tas)
+
+    fu = fractional_uncertainty(u)
+
+    # Assertions based on expected results from
+    # https://github.com/david0811/lafferty-sriver_2023_npjCliAtm/blob/main/unit_test/unit_test_check.ipynb
+    assert fu.sel(time="2020", uncertainty="downscaling") > fu.sel(
+        time="2020", uncertainty="model"
+    )
+    assert fu.sel(time="2020", uncertainty="variability") > fu.sel(
+        time="2020", uncertainty="scenario"
+    )
+    assert (
+        fu.sel(time="2090", uncertainty="scenario").data
+        > fu.sel(time="2020", uncertainty="scenario").data
+    )
+    assert (
+        fu.sel(time="2090", uncertainty="downscaling").data
+        < fu.sel(time="2020", uncertainty="downscaling").data
+    )
+
+    def graph():
+        """Return a graphic like the one in https://github.com/david0811/lafferty-sriver_2023_npjCliAtm/blob/main/unit_test/unit_test_check.ipynb"""
+        from matplotlib import pyplot as plt
+
+        udict = {
+            "Scenario": fu.sel(uncertainty="scenario").to_numpy().flatten(),
+            "Model": fu.sel(uncertainty="model").to_numpy().flatten(),
+            "Downscaling": fu.sel(uncertainty="downscaling").to_numpy().flatten(),
+            "Variability": fu.sel(uncertainty="variability").to_numpy().flatten(),
+        }
+
+        fig, ax = plt.subplots()
+        ax.stackplot(
+            np.arange(2015, 2101),
+            udict.values(),
+            labels=udict.keys(),
+            alpha=1,
+            colors=["#00CC89", "#6869B3", "#CC883C", "#FFFF99"],
+            edgecolor="white",
+            lw=1.5,
+        )
+        ax.set_xlim([2020, 2095])
+        ax.set_ylim([0, 100])
+        ax.legend(loc="upper left")
+        plt.show()
+
+    # graph()
2 changes: 1 addition & 1 deletion xclim/__init__.py
@@ -15,7 +15,7 @@

__author__ = """Travis Logan"""
__email__ = "[email protected]"
-__version__ = "0.47.0"
+__version__ = "0.47.2-beta"


_module_data = _files("xclim.data")
13 changes: 11 additions & 2 deletions xclim/core/indicator.py
@@ -85,8 +85,8 @@
In the following, the section under `<identifier>` is referred to as `data`. When creating indicators from
a dictionary, with :py:meth:`Indicator.from_dict`, the input dict must follow the same structure of `data`.
-The resulting yaml file can be validated using the provided schema (in xclim/data/schema.yml)
-and the YAMALE tool :cite:p:`lopker_yamale_2022`. See the "Extending xclim" notebook for more info.
+When a module is built from a yaml file, the yaml is first validated against the schema (see xclim/data/schema.yml)
+using the YAMALE library (:cite:p:`lopker_yamale_2022`). See the "Extending xclim" notebook for more info.
Inputs
~~~~~~
@@ -115,6 +115,7 @@

import numpy as np
import xarray
+import yamale
from xarray import DataArray, Dataset
from yaml import safe_load

@@ -1716,6 +1717,14 @@ def build_indicator_module_from_yaml(  # noqa: C901
    with ymlpath.open(encoding=encoding) as f:
        yml = safe_load(f)

+    # Read schema
+    schema = yamale.make_schema(Path(__file__).parent.parent / "data" / "schema.yml")
+
+    # Validate - a YamaleError will be raised if the module does not comply with the schema.
+    yamale.validate(
+        schema, yamale.make_data(content=ymlpath.read_text(encoding=encoding))
+    )

    # Load values from top-level in yml.
    # Priority of arguments differ.
    module_name = name or yml.get("module", filepath.stem)
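For callers, the practical effect of this change is that an invalid YAML file now fails fast inside ``build_indicator_module_from_yaml``. A sketch of catching the failure; the file name `bad_module.yml` is hypothetical, and the ``results``/``errors`` attributes are yamale's documented error container:

```python
import yamale
from xclim.core.indicator import build_indicator_module_from_yaml

try:
    # Validation now runs before any indicator is constructed.
    mod = build_indicator_module_from_yaml("bad_module.yml")  # hypothetical file
except yamale.YamaleError as err:
    # yamale reports one result per validated document, each with its errors.
    for result in err.results:
        for error in result.errors:
            print("schema violation:", error)
```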
2 changes: 1 addition & 1 deletion xclim/ensembles/__init__.py
@@ -10,7 +10,7 @@
from __future__ import annotations

from ._base import create_ensemble, ensemble_mean_std_max_min, ensemble_percentiles
-from ._partitioning import hawkins_sutton
+from ._partitioning import fractional_uncertainty, hawkins_sutton, lafferty_sriver
from ._reduce import (
    kkz_reduce_ensemble,
    kmeans_reduce_ensemble,