
Commit

Start working on the documentation
Luthaf committed Apr 22, 2024
1 parent 86002a3 commit 5365cfe
Showing 3 changed files with 211 additions and 76 deletions.
78 changes: 2 additions & 76 deletions src/metatensor/README.md
@@ -3,82 +3,8 @@

## Building the code

1. You'll need to first install libtorch, either by installing PyTorch itself
with Python, or by downloading the prebuilt C++ library from
https://pytorch.org/get-started/locally/.

```bash
# point this to the path where you extracted the C++ libtorch
TORCH_PREFIX=../../..
# if you used Python to install torch, you can do this:
TORCH_CMAKE_PREFIX=$(python -c "import torch; print(torch.utils.cmake_prefix_path)")
TORCH_PREFIX=$(cd "$TORCH_CMAKE_PREFIX/../.." && pwd)

TORCH_INCLUDES="-I$TORCH_PREFIX/include -I$TORCH_PREFIX/include/torch/csrc/api/include"
```

2. a) build and install metatensor-torch from source. You'll need a Rust
compiler on your system; the easiest way to get one is via https://rustup.rs/

```bash
# patch a bug from torch's MKL detection
cd <PLUMED/DIR>
./src/metatensor/patch-torch.sh "$TORCH_PREFIX"

cd <SOME/PLACE/WHERE/TO/PUT/METATENSOR/SOURCES>

# define a location where metatensor should be installed
METATENSOR_PREFIX=<...>

METATENSOR_TORCH_PREFIX="$METATENSOR_PREFIX"

git clone https://github.com/lab-cosmo/metatensor --branch=metatensor-torch-v0.4.0
cd metatensor

mkdir build && cd build
cmake -DBUILD_SHARED_LIBS=ON \
-DCMAKE_INSTALL_PREFIX="$METATENSOR_PREFIX" \
-DCMAKE_PREFIX_PATH="$TORCH_PREFIX" \
-DBUILD_METATENSOR_TORCH=ON \
-DMETATENSOR_INSTALL_BOTH_STATIC_SHARED=OFF \
..

cmake --build . --target install --parallel
```

2. b) alternatively, use metatensor-torch from Python (`pip install metatensor[torch]`)

```bash
METATENSOR_CMAKE_PREFIX=$(python -c "import metatensor; print(metatensor.utils.cmake_prefix_path)")
METATENSOR_PREFIX=$(cd "$METATENSOR_CMAKE_PREFIX/../.." && pwd)

METATENSOR_TORCH_CMAKE_PREFIX=$(python -c "import metatensor.torch; print(metatensor.torch.utils.cmake_prefix_path)")
METATENSOR_TORCH_PREFIX=$(cd "$METATENSOR_TORCH_CMAKE_PREFIX/../.." && pwd)
```

3. build Plumed itself

```bash
cd <PLUMED/DIR>

# set the rpath to make sure plumed executable will be able to find the right libraries
RPATH="-Wl,-rpath,$TORCH_PREFIX/lib -Wl,-rpath,$METATENSOR_PREFIX/lib -Wl,-rpath,$METATENSOR_TORCH_PREFIX/lib"

# configure PLUMED with metatensor
./configure --enable-libtorch --enable-metatensor --enable-modules=+metatensor \
LDFLAGS="-L$TORCH_PREFIX/lib -L$METATENSOR_PREFIX/lib -L$METATENSOR_TORCH_PREFIX/lib $RPATH" \
CPPFLAGS="$TORCH_INCLUDES -I$METATENSOR_PREFIX/include -I$METATENSOR_TORCH_PREFIX/include"

# If you are on Linux and use a pip-installed version of libtorch, or the
# pre-cxx11-ABI build of libtorch, you'll need to add "-D_GLIBCXX_USE_CXX11_ABI=0"
# to the compilation flags:
./configure --enable-libtorch --enable-metatensor --enable-modules=+metatensor \
LDFLAGS="-L$TORCH_PREFIX/lib -L$METATENSOR_PREFIX/lib -L$METATENSOR_TORCH_PREFIX/lib $RPATH" \
CPPFLAGS="$TORCH_INCLUDES -I$METATENSOR_PREFIX/include -I$METATENSOR_TORCH_PREFIX/include" \
CXXFLAGS="-D_GLIBCXX_USE_CXX11_ABI=0"

make -j && make install
```
See [the main documentation](../../user-doc/METATENSORMOD.md) for more
information on how to compile and use this module.


<!-- TODO: explain vesin update process -->
52 changes: 52 additions & 0 deletions src/metatensor/metatensor.cpp
@@ -12,6 +12,58 @@
#include "core/ActionRegister.h"
#include "core/PlumedMain.h"

//+PLUMEDOC METATENSORMOD_COLVAR METATENSOR
/*
Use arbitrary machine learning models as collective variables.

Note that this action requires the metatensor-torch library. Check the
instructions in the \ref METATENSORMOD page to enable this module.

This action enables the use of fully custom machine learning models (based on
the [metatensor atomistic models][mts_models] interface) as collective
variables in PLUMED. Such machine learning models are typically written and
customized using Python code, and then exported to run within PLUMED as
[TorchScript], which is a subset of Python that can be executed by the C++
torch library.

Metatensor offers a way to define such models and pass data from PLUMED (or any
other simulation engine) to the model and back. For more information on how to
define such a model, have a look at the [corresponding tutorials][mts_tutorials],
or at the code in `regtest/metatensor/`. Each of the Python scripts in this
directory defines a custom machine learning CV that can be used with PLUMED.

\par Examples

TODO

\par Collective variables and metatensor models

Collective variables are not yet part of the [known outputs][mts_outputs] for
metatensor models. Until the output format is standardized, this action expects
the following:

- the output name should be `"plumed::cv"`;
- the output should contain a single [block][mts_block];
- the output samples should be named `["system", "atom"]` for per-atom outputs,
  or `["system"]` for global outputs. The `"system"` index should always be 0,
  and the `"atom"` index should be the index of the atom (between 0 and the
  total number of atoms);
- the output should not have any components;
- the output can have arbitrary properties;
- the output should not have any explicit gradients; all gradient calculations
  are done using autograd.
*/ /*
[TorchScript]: https://pytorch.org/docs/stable/jit.html
[mts_models]: https://lab-cosmo.github.io/metatensor/latest/atomistic/index.html
[mts_tutorials]: https://lab-cosmo.github.io/metatensor/latest/examples/atomistic/index.html
[mts_outputs]: https://lab-cosmo.github.io/metatensor/latest/atomistic/outputs.html
[mts_block]: https://lab-cosmo.github.io/metatensor/latest/torch/reference/block.html
*/
//+ENDPLUMEDOC


#if !defined(__PLUMED_HAS_LIBTORCH) || !defined(__PLUMED_HAS_METATENSOR)

157 changes: 157 additions & 0 deletions user-doc/METATENSORMOD.md
@@ -0,0 +1,157 @@
\page METATENSORMOD Metatensor

<!--
description: Using arbitrary machine learning models as collective variables
authors: Guillaume Fraux
reference:
-->

# Overview

This module implements the interface between PLUMED and [metatensor], making it
possible to use arbitrary machine learning models as collective variables.
These machine learning models are defined using custom Python code following
the [metatensor atomistic models][mts_models] interface, and then exported to
[TorchScript]. The exported model is then loaded inside PLUMED and executed
during the simulation.


# Installation

This module has two main dependencies: the C++ torch library (`libtorch`) and
the C++ metatensor_torch library. There are multiple ways of installing both
libraries, which are discussed below.

## Installing the libraries through Python's package manager (`pip`)

The easiest way to get all dependencies on your system is to download the
pre-built Python wheels with `pip`. This is the same set of wheels you will need
to define custom models.

```bash
pip install "metatensor-torch ==0.4.0" # change this version to get newer releases

# optional: get the other metatensor tools to define models (these are only usable from Python).
pip install metatensor-operations metatensor-learn

# export the location to all the libraries:
TORCH_CMAKE_PREFIX=$(python -c "import torch; print(torch.utils.cmake_prefix_path)")
TORCH_PREFIX=$(cd "$TORCH_CMAKE_PREFIX/../.." && pwd)

METATENSOR_CMAKE_PREFIX=$(python -c "import metatensor; print(metatensor.utils.cmake_prefix_path)")
METATENSOR_PREFIX=$(cd "$METATENSOR_CMAKE_PREFIX/../.." && pwd)

METATENSOR_TORCH_CMAKE_PREFIX=$(python -c "import metatensor.torch; print(metatensor.torch.utils.cmake_prefix_path)")
METATENSOR_TORCH_PREFIX=$(cd "$METATENSOR_TORCH_CMAKE_PREFIX/../.." && pwd)

# The torch library installed by pip uses a pre-cxx11 ABI
TORCH_CPPFLAGS="-D_GLIBCXX_USE_CXX11_ABI=0"
```
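Before moving on, it can be worth checking that the prefix variables computed
above point at usable installations. The following is an optional sketch (not
part of the upstream instructions): it only assumes that each prefix contains
the standard `include/` and `lib/` subdirectories used by the configure flags
later on.

```bash
# optional sanity check: each prefix should contain the include/ and lib/
# directories that the compiler and linker flags will point at
check_prefix() {
    if [ -d "$1/include" ] && [ -d "$1/lib" ]; then
        echo "ok: $1"
    else
        echo "missing include/ or lib/ under: $1" >&2
        return 1
    fi
}

# usage, with the variables defined above:
#   check_prefix "$TORCH_PREFIX"
#   check_prefix "$METATENSOR_PREFIX"
#   check_prefix "$METATENSOR_TORCH_PREFIX"
```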

That's it! You can now jump to [the last part](#building-plumed-with-metatensor)
of the installation instructions.

## Using pre-built libraries

If you only want to use existing models, you can download pre-built versions of
the libraries and build PLUMED against these. First, you'll need to download
libtorch (see also \ref installation-libtorch for other instructions on
installing a pre-built libtorch):

```bash
# Download torch 2.2.2 for x86_64 (Intel) Linux.
#
# Variations of this link for other operating systems (macOS, Windows), CPU
# architecture (Apple Silicon, arm64), CUDA versions, and newer versions of
# libtorch can be found at https://pytorch.org/get-started/locally/

wget https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.2.2%2Bcpu.zip
unzip libtorch-cxx11-abi-shared-with-deps-2.2.2+cpu.zip

# alternatively if you have a CUDA-enabled GPU, you can use the corresponding
# pre-built library (here for CUDA 12.1):
wget https://download.pytorch.org/libtorch/cu121/libtorch-cxx11-abi-shared-with-deps-2.2.2%2Bcu121.zip
unzip libtorch-cxx11-abi-shared-with-deps-2.2.2+cu121.zip

# Make the location of libtorch visible
TORCH_PREFIX=$(pwd)/libtorch

# if you are using a library with pre-cxx11 ABI, you need an extra flag:
TORCH_CPPFLAGS="-D_GLIBCXX_USE_CXX11_ABI=0"
```

Once you have libtorch, it is time to build metatensor and metatensor_torch
from sources. There is currently no standalone pre-built library for these
(although you can use the pre-built version that comes with `pip`). You'll need
a Rust compiler on your system, which you can get with
[rustup](https://rustup.rs/) or any other method you prefer.

```bash
# patch a bug from torch's MKL detection in CMake
cd <PLUMED/DIR>
./src/metatensor/patch-torch.sh "$TORCH_PREFIX"

cd <SOME/PLACE/WHERE/TO/PUT/METATENSOR/SOURCES>

# define a location where metatensor should be installed
METATENSOR_PREFIX=<...>

METATENSOR_TORCH_PREFIX="$METATENSOR_PREFIX"

git clone https://github.com/lab-cosmo/metatensor
cd metatensor
# check out the release used by this module, or a more recent one
git checkout metatensor-torch-v0.4.0

mkdir build && cd build
cmake -DBUILD_SHARED_LIBS=ON \
-DCMAKE_INSTALL_PREFIX="$METATENSOR_PREFIX" \
-DCMAKE_PREFIX_PATH="$TORCH_PREFIX" \
-DBUILD_METATENSOR_TORCH=ON \
-DMETATENSOR_INSTALL_BOTH_STATIC_SHARED=OFF \
..

cmake --build . --target install --parallel
```
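To confirm the install step worked, you can check that the expected shared
libraries landed under the install prefixes. This is an optional sketch; the
file names below are an assumption for a Linux build (on macOS the extension
would be `.dylib`).

```bash
# optional check that the install step produced the expected shared
# libraries; the file names assume a Linux build
check_libs() {
    # $1: install prefix; remaining arguments: expected library file names
    prefix="$1"
    shift
    for lib in "$@"; do
        if [ -f "$prefix/lib/$lib" ]; then
            echo "found: $lib"
        else
            echo "missing: $lib (looked in $prefix/lib)" >&2
            return 1
        fi
    done
}

# usage, with the variables defined above:
#   check_libs "$METATENSOR_PREFIX" libmetatensor.so
#   check_libs "$METATENSOR_TORCH_PREFIX" libmetatensor_torch.so
```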

## Building PLUMED with metatensor

Once you have installed all dependencies with one of the methods above, you can
configure PLUMED:

```bash
# set include search path for the compilers
TORCH_INCLUDES="-I$TORCH_PREFIX/include -I$TORCH_PREFIX/include/torch/csrc/api/include"
CPPFLAGS="$TORCH_INCLUDES $TORCH_CPPFLAGS -I$METATENSOR_PREFIX/include -I$METATENSOR_TORCH_PREFIX/include $CPPFLAGS"

# set library search path for the linker
LDFLAGS="-L$TORCH_PREFIX/lib -L$METATENSOR_PREFIX/lib -L$METATENSOR_TORCH_PREFIX/lib $LDFLAGS"

# set the rpath so the plumed executable can find the right libraries at runtime
LDFLAGS="$LDFLAGS -Wl,-rpath,$TORCH_PREFIX/lib"
LDFLAGS="$LDFLAGS -Wl,-rpath,$METATENSOR_PREFIX/lib -Wl,-rpath,$METATENSOR_TORCH_PREFIX/lib"

# configure PLUMED
./configure --enable-libtorch --enable-metatensor --enable-modules=+metatensor \
LDFLAGS="$LDFLAGS" CPPFLAGS="$CPPFLAGS"
```

Pay close attention to the output: it should contain **both** a line about
`checking libtorch` and a line about `checking metatensor`, each ending with
`...yes`. If this is not the case, you'll get a warning such as `cannot enable
__PLUMED_HAS_LIBTORCH` or `cannot enable __PLUMED_HAS_METATENSOR`. If you see
either warning, check `config.log` to find out why the corresponding library
could not be found.
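To narrow down the cause, a grep over `config.log` is usually enough. The
command below is a sketch (the exact messages depend on your compiler and
system); it prints the lines mentioning either library, with some context:

```bash
# show configure's findings about libtorch and metatensor, with two lines
# of context after each match; the exact wording varies between systems
grep -n -i -E -A 2 'libtorch|metatensor' config.log | head -n 40
```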

# Module Contents

This module defines the following actions:

@METATENSORMOD_COLVAR@




[TorchScript]: https://pytorch.org/docs/stable/jit.html
[metatensor]: https://lab-cosmo.github.io/metatensor/latest/index.html
[mts_models]: https://lab-cosmo.github.io/metatensor/latest/atomistic/index.html
