DiFfRG is a set of tools for the discretization of flow equations arising in the functional Renormalization Group (fRG). It supports the setup and calculation of large systems of flow equations, allowing for complex combinations of vertex and derivative expansions.
For spatial discretizations, i.e. discretizations of field space mostly used for derivative expansions, DiFfRG makes different finite element (FE) methods available. These include:
- Continuous Galerkin FE
- Discontinuous Galerkin FE
- Direct discontinuous Galerkin FE
- Local discontinuous Galerkin FE (including derived finite volume (FV) schemes)
The FEM methods included in DiFfRG are built upon the deal.II finite element library, which is highly parallelized and allows for great performance and flexibility. Systems consisting of RG-time dependent PDEs as well as stationary equations can be solved together during the flow, which makes techniques like flowing fields very accessible.
Both explicit and implicit timestepping methods are available, thus allowing for efficient RG-time integration in both the symmetric and the symmetry-broken regime.
We also include a set of tools for the evaluation of integrals and discretization of momentum dependencies.
For an overview, please see the accompanying paper, the tutorial page in the documentation and the examples in Examples/.
If you use DiFfRG in your scientific work, please cite the corresponding paper:
@article{Sattler:2024ozv,
author = "Sattler, Franz R. and Pawlowski, Jan M.",
title = "{DiFfRG: A Discretisation Framework for functional Renormalisation Group flows}",
eprint = "2412.13043",
archivePrefix = "arXiv",
primaryClass = "hep-ph",
month = "12",
year = "2024"
}
To compile and run this project, there are very few requirements which you can easily install using your package manager on Linux or MacOS:
- git for external requirements and to clone this repository.
- CMake for the build systems of DiFfRG, deal.ii and other libraries.
- GNU Make or another generator of your choice.
- A compiler supporting at least the C++20 standard. This project is only tested with the GCC compiler suite as well as with AppleClang, but in principle ICC or standard Clang should also work.
- LAPACK and BLAS in some form, e.g. OpenBLAS.
- The GNU Scientific Library GSL. If GSL is not found, DiFfRG will try to install it by itself.
- Doxygen and graphviz to build the documentation.
The following requirements are optional:
- Python is used in the library for visualization purposes. Furthermore, adaptive phase diagram calculation is implemented as a Python routine.
- ParaView, a program to visualize and post-process the VTK data saved by DiFfRG when treating FEM discretizations.
- CUDA for integration routines on the GPU, which gives a huge speedup (10 - 100x) for the calculation of fully momentum-dependent flow equations. In case you wish to use CUDA, make sure you have a compiler available on your system that is compatible with your version of nvcc, e.g. g++ <= 13.2 for CUDA 12.5; a quick way to check the installed versions is shown below.
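To check which compiler and CUDA versions are installed, you can run the following standard commands (nothing DiFfRG-specific):
$ g++ --version
$ nvcc --version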
All other requirements are bundled and automatically built with DiFfRG. The framework has been tested on the following systems:
Arch Linux:
$ pacman -S git cmake gcc blas-openblas blas64-openblas paraview python doxygen graphviz gsl
In case you want to run with CUDA, as of January 2025 you have to have very specific versions of CUDA and gcc installed. Currently, the gcc13 compiler in the Arch package repository is incompatible with CUDA. To obtain a compatible CUDA+gcc configuration, install them directly from the Arch package archive:
$ pacman -U https://archive.archlinux.org/packages/g/gcc12/gcc12-12.3.0-6-x86_64.pkg.tar.zst \
https://archive.archlinux.org/packages/g/gcc12-libs/gcc12-libs-12.3.0-6-x86_64.pkg.tar.zst \
https://archive.archlinux.org/packages/c/cuda/cuda-12.3.2-1-x86_64.pkg.tar.zst
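Afterwards, you may need to point the build at the older compiler. A common, generic way to do this (a CMake/CUDA convention, not a DiFfRG-specific requirement; it assumes the gcc12 package installs the compiler as g++-12) is to export the corresponding environment variables before configuring:
$ export CXX=/usr/bin/g++-12
$ export CUDAHOSTCXX=/usr/bin/g++-12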
Rocky Linux:
$ dnf --enablerepo=devel install -y gcc-toolset-12 cmake git openblas-devel doxygen doxygen-latex python3 python3-pip gsl-devel
$ scl enable gcc-toolset-12 bash
The second command is necessary to switch into a shell where g++-12 is available.
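As a quick sanity check that the toolset shell provides the expected compiler, you can run a one-off command through scl (standard scl usage, nothing DiFfRG-specific):
$ scl enable gcc-toolset-12 'g++ --version'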
Ubuntu:
$ apt-get update
$ apt-get install git cmake libopenblas-dev paraview build-essential python3 doxygen libeigen3-dev cuda graphviz libgsl-dev
macOS:
First, install Xcode and Homebrew, then run
$ brew install cmake doxygen paraview eigen graphviz gsl
Windows: Instead of running the project natively, it is recommended to use WSL and then go through the installation as if on Linux (e.g. Arch or Ubuntu); a minimal WSL setup is sketched below.
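As a minimal sketch, assuming you choose Ubuntu as the WSL distribution, the setup from an administrator shell on Windows looks like this; afterwards, follow the Ubuntu instructions above inside the WSL shell:
$ wsl --install -d Ubuntu
$ wsl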
Although a native install should be unproblematic in most cases, the setup with CUDA functionality may be daunting. Especially on high-performance clusters, and also depending on the packages available for the chosen distribution, it may be much easier to work with the framework inside a container.
The specific choice of runtime environment is up to the user; however, we provide a small build script to create a Docker container in which DiFfRG will be built. To do this, you will need docker, docker-buildx and, in case you wish to create a CUDA-compatible image, the NVIDIA container toolkit.
For a CUDA-enabled build, run
$ bash setup_docker.sh -c 12.5.1 -j8
In the above, you may want to replace the version 12.5.1 with another version which you can find on Docker Hub at nvidia/cuda.
Alternatively, for a CUDA-less build, simply run
$ bash setup_docker.sh -j8
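Once the image has been built, you can start an interactive container from it. The image tag below is a placeholder; use the tag reported by setup_docker.sh or by docker image ls, and drop the --gpus all flag for a CUDA-less image:
$ docker run --gpus all -it <image-tag> bash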
If using other environments, e.g. ENROOT, the preferred approach is simply to build an image on top of the CUDA images by NVIDIA. Optimal compatibility is given using nvidia/cuda:12.5.1-devel-rockylinux9. Proceed with the installation setup for Rocky Linux above.
For example, with ENROOT a DiFfRG image can be built by following these steps:
$ enroot import docker://nvidia/cuda:12.5.1-devel-rockylinux9
$ enroot create --name DiFfRG nvidia+cuda+12.5.1-devel-rockylinux9.sqsh
$ enroot start --root --rw -m ./:/DiFfRG_source DiFfRG bash
Afterwards, one proceeds with the above Rocky Linux setup.
If all requirements are met, you can clone the repository to a directory of your choice,
$ git clone https://github.com/satfra/DiFfRG.git
and start the build after switching into the repository directory.
$ cd DiFfRG
$ bash -i build.sh -j8 -cf -i /opt/DiFfRG
The build.sh bash script will build and set up the DiFfRG project and all its requirements. This can take up to half an hour, as the deal.II library is quite large.
This script has the following options:
- -f : Perform a full build and install of everything without confirmations.
- -c : Use CUDA when building the DiFfRG library.
- -i <directory> : Set the installation directory for the library.
- -j <threads> : Set the number of threads passed to make and git fetch.
- --help : Display this information.
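For example, a CUDA-enabled full build using 8 threads that installs into /opt/DiFfRG corresponds to the invocation shown above; a CPU-only build into your home directory (the install path here is just an example) could look like this:
$ bash -i build.sh -f -j8 -i $HOME/DiFfRG_install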
Depending on your number of CPU cores, you should adjust the -j parameter, which indicates the number of threads used in the build process. Note that choosing this too large may lead to extreme RAM usage, so tread carefully.
As soon as the build has finished, you can find a full install of the library in the DiFfRG_install subdirectory, or in the directory passed via the -i option.
If you make changes to the library code, you can update the library by running
$ bash -i update_DiFfRG.sh -clm -j8 -i /opt/DiFfRG
where once again the -j parameter should be adjusted to your number of CPU cores.
The update_DiFfRG.sh script takes the following optional arguments:
- -c : Use CUDA when building the DiFfRG library.
- -l : Build the DiFfRG library.
- -i <directory> : Set the installation directory for the library.
- -j <threads> : Set the number of threads passed to make and git fetch.
- -m : Install the Mathematica package locally.
- --help : Display this information.
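For example, to rebuild only the library without CUDA (thread count and install path are just examples), one could run:
$ bash -i update_DiFfRG.sh -l -j8 -i /opt/DiFfRG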
For an overview, please see the tutorial page in the documentation. Local documentation is always built automatically when running the setup script, but it can also be built manually by running
$ make documentation
inside the DiFfRG_build directory. You can then find a code reference in the top directory.
All backend code is contained in the DiFfRG directory.
Several simulations are defined in the Applications directory, which can be used as a starting point for your own simulations.
To see how fast the simulation progresses, one can set the verbosity parameter either in the parameter file,
{
  "output": {
    "verbosity": 1
  }
}
or from the CLI,
$ ./my_simulation -si /output/verbosity=1
Any DiFfRG simulation using the DiFfRG::ConfigurationHelper class can be asked to print an overview of the configuration syntax:
$ ./my_simulation --help
This is a DiFfRG simulation. You can pass the following optional parameters:
--help shows this text
--generate-parameter-file generates a parameter file with some default values
-p specify a parameter file other than the standard parameter.json
-sd overwrite a double parameter. This should be in the format '-s physical/T=0.1'
-si overwrite an integer parameter. This should be in the format '-s physical/Nc=1'
-sb overwrite a boolean parameter. This should be in the format '-s physical/use_sth=true'
-ss overwrite a string parameter. This should be in the format '-s physical/a=hello'
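As a sketch of how these flags combine, the following run would load a custom parameter file and override one double and one boolean parameter; the file name and parameter paths are purely illustrative, see the help output above for the exact format:
$ ./my_simulation -p my_parameters.json -sd physical/T=0.15 -sb physical/use_sth=false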
In general, the IDA timestepper from the SUNDIALS suite has proven to be the optimal choice for any fRG flow with convexity restoration. Additionally, this solver allows for out-of-the-box solving of additional algebraic systems, which is handy for more complicated fRG setups.
However, a set of alternative steppers is also provided: ARKode provides adaptive explicit, implicit and ImEx steppers, and furthermore explicit and implicit Euler, as well as TRBDF2, are separately implemented. The use of the latter three is however discouraged, as the SUNDIALS timesteppers always give better performance.
If solving purely variable-dependent systems, consider one of the Boost time steppers: Boost_RK45, Boost_RK78 or Boost_ABM. The latter is especially well suited for extremely large systems without extremely fast dynamics, but lacks adaptive timestepping. In practice, choosing Boost_ABM over one of the RK steppers may speed up a Yang-Mills simulation with full momentum dependences by more than a factor of 10.
- The main backend for field-space discretization is deal.II, which provides the entire FEM-machinery as well as many other utility components.
- For performant and convenient calculation of Jacobian matrices we use the autodiff library, which implements automatic forward and backwards differentiation in C++ and also in CUDA.
- Time integration relies heavily on the SUNDIALS suite, specifically on the IDAs and ARKODE solvers.
- Rapidcsv for quick processing of .csv files.
- Catch2 for unit testing.
- RMM, a memory manager for CUDA, which is used for GPU-accelerated loop integrations.
- QMC for adaptive Quasi-Monte-Carlo integration.
- spdlog for logging.