Unclear if conda user package works with MPI #390

Open · matilde-t opened this issue Nov 1, 2024 · 4 comments

@matilde-t
I installed pyamrex from conda and it does not seem to use MPI, even though MPI is installed on my system. The documentation only mentions that the pyamrex conda package does not yet provide GPU support, but it says nothing about MPI. In the developer installation, MPI looks enabled by default, so I don't understand this behaviour.
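
A quick way to see which variant is actually installed is to check the conda build string (a hedged check: the naming below reflects the usual conda-forge convention of encoding the variant at the start of the build string, and may differ):

conda list pyamrex
# a build string starting with nompi_ indicates the serial build,
# one starting with mpi_ (e.g. mpi_mpich_* or mpi_openmpi_*) indicates the MPI build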

If I run tests/test_particleContainer.py, I obtain

SKIPPED [1] tests/test_particleContainer.py:477: Requires AMReX_MPI=ON
SKIPPED [1] tests/test_particleContainer.py:542: Requires AMReX_MPI=ON

even though my Config reports

class Config:
    amrex_version: typing.ClassVar[str] = "24.10"
    gpu_backend = None
    have_eb: typing.ClassVar[bool] = True
    have_gpu: typing.ClassVar[bool] = False
    have_mpi: typing.ClassVar[bool] = True
    have_omp: typing.ClassVar[bool] = False
    spacedim: typing.ClassVar[int] = 3
    verbose: typing.ClassVar[int] = 1

and MPI is indeed installed on my system.
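
As a sanity check (a sketch; it assumes mpi4py is installed alongside pyamrex), one can confirm that mpiexec actually launches multiple ranks and that the imported pyamrex build reports MPI support:

mpiexec -np 2 python3 -c "from mpi4py import MPI; print(MPI.COMM_WORLD.Get_size())"
# expected: "2" printed once per rank

mpiexec -np 2 python3 -c "import amrex.space3d as amr; print(amr.Config.have_mpi)"
# expected: "True" printed once per rank if the MPI-enabled build is active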

If I remove the skip statement, I see that something does run, but the output mentions OMP threads, which should be disabled.

This is the output after a slight tweak of the test:

tests/test_particleContainer.py Initializing AMReX (24.10)...
OMP initialized with 4 OMP threads
AMReX (24.10) initialized
ParticleContainer spread across MPI nodes - bytes (num particles): [Min: 0 (0), Max: 0 (0), Total: 0 (0)]
bytespread [0, 0, 0]
ParticleContainer spread across MPI nodes - bytes: [Min: 0, Max: 0, Total: 0]
capacity [0, 0, 0]
number_of_particles_at_level(0) 0
-------------------------
define particle container
ParticleContainer spread across MPI nodes - bytes (num particles): [Min: 0 (0), Max: 0 (0), Total: 0 (0)]
bytespread [0, 0, 0]
ParticleContainer spread across MPI nodes - bytes: [Min: 0, Max: 0, Total: 0]
capacity [0, 0, 0]
number_of_particles_at_level(0) 0
---------------------------
add a particle to each grid
NumberOfParticles 8
Finest level = 0
Iterate particle boxes & set values
at level 0:
ParticleContainer spread across MPI nodes - bytes (num particles): [Min: 672 (8), Max: 672 (8), Total: 672 (8)]
bytespread [672, 672, 672]
ParticleContainer spread across MPI nodes - bytes: [Min: 2496, Max: 2496, Total: 2496]
capacity [2496, 2496, 2496]
number_of_particles_at_level(0) 8
---------------------------
call redistribute()
ParticleContainer spread across MPI nodes - bytes (num particles): [Min: 672 (8), Max: 672 (8), Total: 672 (8)]
bytespread [672, 672, 672]
ParticleContainer spread across MPI nodes - bytes: [Min: 760, Max: 760, Total: 760]
capacity [760, 760, 760]
number_of_particles_at_level(0) 8

This is the test code, for reference:

import importlib

import numpy as np
import pytest

import amrex.space3d as amr


def test_pc_init():
    # This test only runs on CPU or requires managed memory,
    # see https://github.com/cupy/cupy/issues/2031
    pc = (
        amr.ParticleContainer_2_1_3_1_managed()
        if amr.Config.have_gpu
        else amr.ParticleContainer_2_1_3_1_default()
    )

    print("bytespread", pc.byte_spread)
    print("capacity", pc.print_capacity())
    print("number_of_particles_at_level(0)",
          pc.number_of_particles_at_level(0))
    assert pc.number_of_particles_at_level(0) == 0

    bx = amr.Box(amr.IntVect(0, 0, 0), amr.IntVect(63, 63, 63))
    rb = amr.RealBox(0, 0, 0, 1, 1, 1)
    coord_int = 1  # RZ
    periodicity = [0, 0, 1]
    gm = amr.Geometry(bx, rb, coord_int, periodicity)

    ba = amr.BoxArray(bx)
    ba.max_size(32)
    dm = amr.DistributionMapping(ba)

    print("-------------------------")
    print("define particle container")
    pc.Define(gm, dm, ba)
    assert pc.OK()
    assert (
        pc.num_struct_real == amr.ParticleContainer_2_1_3_1_default.num_struct_real == 2
    )
    assert (
        pc.num_struct_int == amr.ParticleContainer_2_1_3_1_default.num_struct_int == 1
    )
    assert (
        pc.num_array_real == amr.ParticleContainer_2_1_3_1_default.num_array_real == 3
    )
    assert pc.num_array_int == amr.ParticleContainer_2_1_3_1_default.num_array_int == 1

    print("bytespread", pc.byte_spread)
    print("capacity", pc.print_capacity())
    print("number_of_particles_at_level(0)",
          pc.number_of_particles_at_level(0))
    assert pc.total_number_of_particles() == pc.number_of_particles_at_level(0) == 0
    assert pc.OK()

    print("---------------------------")
    print("add a particle to each grid")
    Npart_grid = 1
    iseed = 1
    myt = amr.ParticleInitType_2_1_3_1()
    myt.real_struct_data = [0.5, 0.4]
    myt.int_struct_data = [5]
    myt.real_array_data = [0.5, 0.2, 0.4]
    myt.int_array_data = [1]
    pc.init_random_per_box(Npart_grid, iseed, myt)
    ngrid = ba.size
    npart = Npart_grid * ngrid

    print("NumberOfParticles", pc.number_of_particles_at_level(0))
    assert pc.total_number_of_particles() == pc.number_of_particles_at_level(0) == npart
    assert pc.OK()

    print(f"Finest level = {pc.finest_level}")

    print("Iterate particle boxes & set values")
    # lvl = 0
    for lvl in range(pc.finest_level + 1):
        print(f"at level {lvl}:")
        for pti in pc.iterator(pc, level=lvl):
            # print("...")
            assert pti.num_particles == 1
            assert pti.num_real_particles == 1
            assert pti.num_neighbor_particles == 0
            assert pti.level == lvl
            # print(pti.pair_index)
            # print(pti.geom(level=lvl))

            # note: cupy does not yet support this
            # https://github.com/cupy/cupy/issues/2031
            aos = pti.aos()
            aos_arr = aos.to_numpy()
            aos_arr[0]["x"] = 0.30
            aos_arr[0]["y"] = 0.35
            aos_arr[0]["z"] = 0.40

            # TODO: this seems to write into a copy of the data
            soa = pti.soa()
            real_arrays = soa.get_real_data()
            int_arrays = soa.get_int_data()
            real_arrays[0] = [0.55]
            real_arrays[1] = [0.22]
            int_arrays[0] = [2]

            assert np.allclose(real_arrays[0], np.array([0.55]))
            assert np.allclose(real_arrays[1], np.array([0.22]))
            assert np.allclose(int_arrays[0], np.array([2]))

    print("bytespread", pc.byte_spread)
    print("capacity", pc.print_capacity())
    print("number_of_particles_at_level(0)",
          pc.number_of_particles_at_level(0))

    print("---------------------------")
    print("call redistribute()")

    pc.redistribute()
    print("bytespread", pc.byte_spread)
    print("capacity", pc.print_capacity())
    print("number_of_particles_at_level(0)",
          pc.number_of_particles_at_level(0))
@WeiqunZhang (Member)

How did you run it? Did you use mpiexec? See https://github.com/AMReX-Codes/pyamrex?tab=readme-ov-file#test

@matilde-t (Author)

I had forgotten about that, but I have now run mpiexec -np 2 python3 -m pytest -rfEsxX tests/test_particleContainer.py on the original test, and I get

========================================================================================== test session starts ==========================================================================================
platform linux -- Python 3.13.0, pytest-7.4.4, pluggy-1.0.0
rootdir: /home/mati/Thesis/pyamrex/pyamrex
collecting ... ========================================================================================== test session starts ==========================================================================================
platform linux -- Python 3.13.0, pytest-7.4.4, pluggy-1.0.0
rootdir: /home/mati/Thesis/pyamrex/pyamrex
collected 13 items                                                                                                                                                                                      

collected 13 items                                                                                                                                                                                      

tests/test_particleContainer.py ..................ss....s                                                                                                                                                     [100%]

======================================================================================== short test summary info ========================================================================================
SKIPPED [1] tests/test_particleContainer.py:477: Requires AMReX_MPI=ON
SKIPPED [1] tests/test_particleContainer.py:542: Requires AMReX_MPI=ON
===================================================================================== 11 passed, 2 skipped in 1.07s =====================================================================================
AMReX (24.10) finalized
s                                                                                                                                                     [100%]

======================================================================================== short test summary info ========================================================================================
SKIPPED [1] tests/test_particleContainer.py:477: Requires AMReX_MPI=ON
SKIPPED [1] tests/test_particleContainer.py:542: Requires AMReX_MPI=ON
===================================================================================== 11 passed, 2 skipped in 1.09s =====================================================================================
AMReX (24.10) finalized

@WeiqunZhang (Member)

Maybe the default is nompi. Anyway, you should be able to install the MPI version explicitly: conda create -n pyamrex-mpi -c conda-forge pyamrex=*=mpi_*.
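
A possible way to verify the new environment afterwards (a sketch; the exact build string depends on platform and MPI provider):

conda activate pyamrex-mpi
conda list pyamrex
# the build string should now start with mpi_ (e.g. mpi_mpich_* or mpi_openmpi_*)
python3 -c "import amrex.space3d as amr; print(amr.Config.have_mpi)"
# expected: True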

@matilde-t (Author)

Yes, installing the mpi version explicitly seems to work, thank you. Is there a reason why it's not the default?
