Traditional finite difference schemes are a cornerstone of scientific computing. But if you care about uncertainty quantification, transparent algorithm assumptions, and flexibility in your function evaluations, you want probabilistic numerical finite differences.
Traditional finite difference coefficients implicitly assume that the function to be differentiated is a polynomial and is evaluated on an equidistant grid. Is your function really a polynomial? Are you happy to evaluate it only on equidistant grids? If not, read on.
Traditional, non-probabilistic finite difference schemes fit a polynomial to function evaluations and differentiate the polynomial to compute finite difference weights (Fornberg, 1988). But why use a polynomial? If we use a Gaussian process instead, we get uncertainty quantification, scattered point sets, transparent modelling assumptions, and many more benefits essentially for free!
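To make the idea concrete, here is a minimal, self-contained sketch of Gaussian-process-based finite difference weights. The kernel, points, and test function are illustrative choices, not the package's internals or defaults:

```python
import jax
import jax.numpy as jnp

# Sketch: for a Gaussian process with kernel k, the posterior mean of f'(x0)
# given evaluations f(X) is weights @ f(X), where K(X, X) @ weights = dk(x0, X).
k = lambda x, y: jnp.exp(-0.5 * (x - y) ** 2)  # illustrative kernel choice
dk = jax.grad(k, argnums=0)  # kernel derivative w.r.t. the first argument

xs = jnp.array([-0.2, 0.0, 0.2])  # scattered points would work just as well
K = jax.vmap(lambda x: jax.vmap(lambda y: k(x, y))(xs))(xs)
kd = jax.vmap(lambda y: dk(0.0, y))(xs)
weights = jnp.linalg.solve(K, kd)  # finite difference weights

f = lambda x: (x - 1.0) ** 2
print(weights @ f(xs))  # approximately f'(0.0) = -2.0
```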
Probabilistic numerical finite difference schemes can be applied to function evaluations as follows.
```python
>>> import jax
>>> import jax.numpy as jnp
>>>
>>> import probfindiff
>>>
>>> jax.config.update("jax_platform_name", "cpu")
>>>
>>> scheme, xs = jax.jit(probfindiff.central)(dx=0.2)
>>> f = lambda x: (x - 1.0) ** 2
>>> dfx, cov = jax.jit(probfindiff.differentiate)(f(xs), scheme=scheme)
>>> print(jnp.round(dfx, 1))
-2.0
>>> print(jnp.round(jnp.log10(cov), 0))
-7.0
>>> print(isinstance(scheme, tuple))
True
>>> print(jnp.round(scheme.weights, 1))
[-2.5  0.   2.5]
```
See this page for more examples.
```
pip install probfindiff
```

This assumes that JAX is already installed. If not, use

```
pip install probfindiff[cpu]
```

to combine `probfindiff` with `jax[cpu]`.

With all development-related dependencies:

```
pip install probfindiff[ci]
```
- Forward, backward, central probabilistic numerical finite differences
- Finite differences on arbitrarily scattered point sets and in high dimensions
- Finite difference schemes that account for observation noise
- Symmetric and unsymmetric collocation ("Kansa's method")
- Polynomial kernels, exponentiated quadratic kernels, and an API for custom kernels
- Partial derivatives, Laplacians, divergences, compositions of differential operators, and an API for custom differential operators
- Tutorials that explain how to use all of the above
- Compilation, vectorisation, automatic differentiation, and everything else that JAX provides (see the sketch after this list)
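As one illustration of the last point, here is a hedged sketch of batching a scheme over many stencil locations with `jax.vmap`; the function and grid are arbitrary examples, not package defaults:

```python
import jax
import jax.numpy as jnp
import probfindiff

# Sketch: derive the scheme once, then vmap it over a batch of stencils.
scheme, xs = probfindiff.central(dx=0.1)
centers = jnp.linspace(0.0, 1.0, 10)
fx = jnp.sin(xs[None, :] + centers[:, None])  # evaluations, one stencil per row
dfx, _ = jax.vmap(lambda fi: probfindiff.differentiate(fi, scheme=scheme))(fx)
print(jnp.round(dfx - jnp.cos(centers), 1))  # errors round to zero
```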
Check the tutorials on this page for examples.
The technical background is explained in the paper:
```bibtex
@inproceedings{kramer2022probabilistic,
    title={Probabilistic Numerical Method of Lines for Time-Dependent Partial Differential Equations},
    author={Kr{\"a}mer, Nicholas and Schmidt, Jonathan and Hennig, Philipp},
    booktitle={AISTATS},
    year={2022}
}
```
Please consider citing it if you use this repository for your research.
Finite difference schemes are not new, obviously. Here is a selection of related packages:
- FinDiff (https://findiff.readthedocs.io/en/latest/)
- pystencils (https://pycodegen.pages.i10git.cs.fau.de/pystencils/index.html#)
- FDM (https://github.com/wesselb/fdm)
- RBF(.fd) (https://rbf.readthedocs.io/en/latest/fd.html)
- FiniteDifferences.jl (https://github.com/JuliaDiff/FiniteDifferences.jl)
- FiniteDiff.jl (https://github.com/JuliaDiff/FiniteDiff.jl)
`probfindiff` does many things similarly to the packages above, but differs in more than one respect:

- We do probabilistic numerical finite differences. Essentially, this encompasses a Gaussian process perspective on radial-basis-function-generated finite differences (provided by "RBF" above). As such, and unlike all of the above, we treat uncertainty quantification and modelling as first-class citizens.
- `probfindiff` uses JAX, which brings with it automatic differentiation, JIT-compilation, GPUs, and more.
- `probfindiff` does not evaluate functions. Most of the packages above provide an API `differentiate(fn, x, scheme)`, whereas we use `differentiate(fn(x), scheme.weights)`. We choose the latter because implementations simplify (we pass around arrays instead of callables), we gain efficiency (it becomes obvious which quantities to reuse in multiple applications), and because users know best how to evaluate their functions (for example, whether the function is vectorised). A sketch of this reuse follows below.
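For instance, assuming the evaluation grid stays fixed, a single scheme can be reused across many functions; the functions below are arbitrary examples:

```python
import jax.numpy as jnp
import probfindiff

# Sketch: the scheme is computed once and reused; only arrays are passed around.
scheme, xs = probfindiff.central(dx=0.05)
for f in (jnp.sin, jnp.cos, jnp.exp):
    dfx, _ = probfindiff.differentiate(f(xs), scheme=scheme)
    print(jnp.round(dfx, 1))  # approximately f'(0.0): 1.0, 0.0, 1.0
```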