- Time Table for alpaka and OpenPMD Workshop on 23-25 October 2024
- Presentations
- Repository for hands-on exercises and presentations

Repositories:
- alpaka Documentation
- openPMD Documentation
- Others
- Support for heterogeneous architectures (e.g., CPUs, GPUs, FPGAs).
- Write once, run anywhere: alpaka abstracts hardware specifics for parallel computing.
- Install alpaka with the correct CMake backend options
- Compile and run an alpaka example from the repository.
- Verify it runs on the available hardware (CPU, GPU, etc.).
- Grid Structure and WorkDivision
- Data Parallelism
- Indexing
- Memory Allocation, Padding and Pitch
- Use indexing to match threads to data (see the kernel sketch below)
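A minimal sketch of the thread-to-data mapping named above. The kernel name, element type, and scaling operation are illustrative assumptions; the index query itself follows the alpaka API.

```cpp
#include <alpaka/alpaka.hpp>
#include <cstddef>

// Illustrative kernel: each thread scales the one element that matches
// its global grid index (1-dimensional work division assumed).
struct ScaleKernel
{
    template<typename TAcc>
    ALPAKA_FN_ACC void operator()(TAcc const& acc, float* data, std::size_t n) const
    {
        // Global thread index within the grid.
        auto const i = alpaka::getIdx<alpaka::Grid, alpaka::Threads>(acc)[0];
        if(i < n) // Guard: the grid may be padded beyond the data size.
            data[i] *= 2.0f;
    }
};
```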
- Double buffering with two accelerator buffers to solve the heat equation
- The difference equation as a stencil operation (see the sketch below)
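A minimal sketch of the two ideas above in plain C++ (the hands-on itself uses alpaka buffers): one explicit time step of the 1D heat equation as a three-point stencil, plus the buffer swap that implements double buffering. dt, dx, and the explicit-Euler form are illustrative assumptions.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// One explicit-Euler time step: uNext is written from uCurr via the
// three-point stencil of the difference equation.
void heatStep(std::vector<double> const& uCurr, std::vector<double>& uNext,
              double dt, double dx)
{
    double const r = dt / (dx * dx);
    for(std::size_t i = 1; i + 1 < uCurr.size(); ++i)
        uNext[i] = uCurr[i] + r * (uCurr[i - 1] - 2.0 * uCurr[i] + uCurr[i + 1]);
}

// Double buffering: after each step the roles of the two buffers swap,
// so no extra allocation or copy is needed.
void solve(std::vector<double>& uCurr, std::vector<double>& uNext,
           double dt, double dx, int steps)
{
    for(int s = 0; s < steps; ++s)
    {
        heatStep(uCurr, uNext, dt, dx);
        std::swap(uCurr, uNext);
    }
}
```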
- Accelerator, Device, Queue, Task
- Buffers and Views: Managing memory across devices
- alpaka mdspan
- Setting the work division manually (see the setup sketch below)
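A compact sketch tying these concepts together, reusing the ScaleKernel from the indexing sketch above. The API names follow alpaka 1.x (Platform, Queue, allocBuf, WorkDivMembers); the chosen accelerator, sizes, and work division are assumptions.

```cpp
#include <alpaka/alpaka.hpp>
#include <cstddef>

// ScaleKernel as defined in the indexing sketch above.

int main()
{
    using Dim = alpaka::DimInt<1>;
    using Idx = std::size_t;
    // Swap in e.g. AccGpuCudaRt<Dim, Idx> on an NVIDIA GPU.
    using Acc = alpaka::AccCpuSerial<Dim, Idx>;

    auto const platform = alpaka::Platform<Acc>{};
    auto const dev = alpaka::getDevByIdx(platform, 0);
    auto queue = alpaka::Queue<Acc, alpaka::Blocking>{dev};

    Idx const n = 1024;
    auto buf = alpaka::allocBuf<float, Idx>(dev, alpaka::Vec<Dim, Idx>{n});

    // Manually chosen work division: blocks per grid, threads per block,
    // elements processed per thread. The serial CPU backend requires one
    // thread per block; GPU backends would typically use e.g. 128.
    auto const workDiv = alpaka::WorkDivMembers<Dim, Idx>{
        alpaka::Vec<Dim, Idx>{n},       // blocks per grid
        alpaka::Vec<Dim, Idx>{Idx{1}},  // threads per block
        alpaka::Vec<Dim, Idx>{Idx{1}}}; // elements per thread

    alpaka::exec<Acc>(queue, workDiv, ScaleKernel{}, alpaka::getPtrNative(buf), n);
    alpaka::wait(queue); // the kernel (a task) has finished after this
}
```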
- Hands-on 5: Using alpaka mdspan for easier indexing
- Hands-on 6: Domain Decomposition, chunking or tiling (Day 2)
- Hands-on 7: Using multiple Queues to increase performance. Explore overlap between computation and data transfer (Day 2)
- Hands-on 8: Using shared memory for chunks (Day 2)
- Measure the performance of your kernels and analyze the timing with and without shared memory (see the timing sketch below)
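A hedged sketch of both ingredients: declaring block-shared memory inside a kernel, and timing a launch from the host with std::chrono. Backend choice and sizes are illustrative; on the serial CPU backend each block has a single thread, so the shared tile is kept minimal.

```cpp
#include <alpaka/alpaka.hpp>
#include <chrono>
#include <cstddef>
#include <cstdio>

// Kernel using block-shared memory. alpaka::declareSharedVar needs a unique
// compile-time tag; __COUNTER__ is the usual idiom.
struct SharedTileKernel
{
    template<typename TAcc>
    ALPAKA_FN_ACC void operator()(TAcc const& acc) const
    {
        auto& tile = alpaka::declareSharedVar<float[1], __COUNTER__>(acc);
        tile[0] = 0.0f;
        alpaka::syncBlockThreads(acc); // all threads of the block see the tile
    }
};

int main()
{
    using Dim = alpaka::DimInt<1>;
    using Idx = std::size_t;
    using Acc = alpaka::AccCpuSerial<Dim, Idx>; // assumption: serial CPU backend
    auto const platform = alpaka::Platform<Acc>{};
    auto queue = alpaka::Queue<Acc, alpaka::Blocking>{alpaka::getDevByIdx(platform, 0)};
    auto const workDiv = alpaka::WorkDivMembers<Dim, Idx>{
        alpaka::Vec<Dim, Idx>{Idx{1024}},
        alpaka::Vec<Dim, Idx>{Idx{1}},
        alpaka::Vec<Dim, Idx>{Idx{1}}};

    // Wall-clock timing; alpaka::wait ensures the kernel has finished
    // before the clock stops.
    auto const t0 = std::chrono::steady_clock::now();
    alpaka::exec<Acc>(queue, workDiv, SharedTileKernel{});
    alpaka::wait(queue);
    auto const t1 = std::chrono::steady_clock::now();
    std::printf("kernel: %f s\n", std::chrono::duration<double>(t1 - t0).count());
}
```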
This session will introduce participants to the scientific metadata format openPMD. The practical sessions and exercises include basic modeling of scientific data via the openPMD-api, options for visualizing openPMD data, streaming I/O workflows, data compression, parallel I/O, and more.
- First write with openPMD
- Modeling the heatEquation image as openPMD data
- Extending the heatEquation output with self-descriptive metadata
- Include: physical units, grid geometry, timing information, author identification (see the write sketch below)
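A hedged sketch of such a write with the openPMD-api (C++). File name, mesh name, and all metadata values are illustrative assumptions; the calls themselves belong to the openPMD-api.

```cpp
#include <openPMD/openPMD.hpp>
#include <cstddef>
#include <memory>
#include <vector>

int main()
{
    using namespace openPMD;

    Series series("heat_%06T.h5", Access::CREATE); // %06T expands to the step
    series.setAuthor("Jane Doe <jane@example.com>"); // author identification

    auto iteration = series.writeIterations()[0];
    iteration.setTime(0.0); // timing information
    iteration.setDt(1e-3);
    iteration.setTimeUnitSI(1.0); // seconds

    auto temperature = iteration.meshes["temperature"];
    temperature.setGridSpacing(std::vector<double>{1e-2}); // grid geometry
    temperature.setGridGlobalOffset({0.0});
    temperature.setAxisLabels({"x"});
    // Physical units: temperature has dimension Theta^1.
    temperature.setUnitDimension({{UnitDimension::theta, 1.0}});

    auto scalar = temperature[MeshRecordComponent::SCALAR];
    constexpr std::size_t n = 100;
    scalar.resetDataset(Dataset(Datatype::DOUBLE, {n}));
    auto data = std::shared_ptr<double>(new double[n](), std::default_delete<double[]>());
    scalar.storeChunk(data, {0}, {n});

    iteration.close(); // flush this step to disk
}
```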
- Use matplotlib to load the written file from disk and visualize the heat equation solution as it progresses
- Setup for Jupyter Notebook on LUMI
- Specifying the I/O backend and its options (here: compression) via a JSON/TOML-formatted configuration file (see the sketch below)
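One way to pass such a configuration is sketched below: the openPMD-api Series constructor accepts a JSON options string (a value of the form @/path/to/config.toml reads the options from a file instead). The ADIOS2 blosc operator and its compression level are illustrative assumptions.

```cpp
#include <openPMD/openPMD.hpp>

int main()
{
    using namespace openPMD;

    // JSON options select the backend's dataset operators; here an
    // illustrative ADIOS2 compression operator.
    auto const options = R"({
        "adios2": {
            "dataset": {
                "operators": [
                    { "type": "blosc", "parameters": { "clevel": "5" } }
                ]
            }
        }
    })";

    // The .bp ending selects the ADIOS2 backend.
    Series series("heat_%06T.bp", Access::CREATE, options);
}
```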
- Visualizing the heat equation data again, this time without using the filesystem
- Instead: read streamed data from the simulation as it progresses
- All of this without changing more than one line of code (the filename); I/O options are otherwise steered via the JSON/TOML config (see the streaming sketch below)
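A hedged sketch of the reading side: the one changed line is the file name, whose .sst ending selects ADIOS2's streaming engine; mesh and record names match the write sketch above.

```cpp
#include <openPMD/openPMD.hpp>
#include <iostream>

int main()
{
    using namespace openPMD;

    // The only code change versus file-based reading: ".sst" selects
    // ADIOS2's streaming engine instead of the filesystem.
    Series series("heat.sst", Access::READ_ONLY);

    for(auto iteration : series.readIterations()) // steps arrive one by one
    {
        auto scalar = iteration.meshes["temperature"][MeshRecordComponent::SCALAR];
        auto data = scalar.loadChunk<double>(); // request the whole dataset
        iteration.close(); // performs the read and lets the writer proceed
        // Assuming a non-empty dataset:
        std::cout << "step " << iteration.iterationIndex
                  << ": first value " << data.get()[0] << '\n';
    }
}
```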
- Use the openPMD-viewer to view particle-mesh data produced by PIConGPU
- Post-hoc compression of uncompressed simulation data with openpmd-pipe
- Optionally: Inspecting the same data with ParaView
- Memory-optimized API for workflows that require data preparation from custom internal data structures before output
- Extended modeling of openPMD data: constant components, particle patches (see the sketch below)
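Hedged sketches of two of these features (particle patches are not shown): the span-based storeChunk overload of the openPMD-api (available since version 0.14) hands out a view into the backend's buffer so data can be prepared in place, and makeConstant records a uniform component without writing an array. Mesh names and values are illustrative assumptions.

```cpp
#include <openPMD/openPMD.hpp>
#include <cstddef>

int main()
{
    using namespace openPMD;
    Series series("heat_%06T.bp", Access::CREATE);
    auto iteration = series.writeIterations()[0];
    constexpr std::size_t n = 100;

    // Memory-optimized path: request a writable view into the backend's
    // buffer and fill it in place, instead of copying from own storage.
    auto scalar = iteration.meshes["temperature"][MeshRecordComponent::SCALAR];
    scalar.resetDataset(Dataset(Datatype::DOUBLE, {n}));
    auto view = scalar.storeChunk<double>({0}, {n}); // span-based overload
    auto* buffer = view.currentBuffer().data();      // backend-owned memory
    for(std::size_t i = 0; i < n; ++i)
        buffer[i] = static_cast<double>(i);

    // Constant component: one value stands in for the whole dataset,
    // no array is written.
    auto density = iteration.meshes["density"][MeshRecordComponent::SCALAR];
    density.resetDataset(Dataset(Datatype::DOUBLE, {n}));
    density.makeConstant(1.0);

    iteration.close();
}
```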
- Parallel filesystems
- Parallel I/O with ADIOS2: Aggregation strategies
- Parallel I/O with HDF5: Subfiling (a minimal MPI write sketch follows below)
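A hedged sketch of MPI-parallel writing, the layer on which the aggregation and subfiling options above act: all ranks open the Series collectively, and each rank stores its own slab of one global dataset. File name, sizes, and mesh name are illustrative assumptions.

```cpp
#include <openPMD/openPMD.hpp>
#include <mpi.h>
#include <cstddef>
#include <cstdint>
#include <memory>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    {
        using namespace openPMD;
        // Parallel Series: all ranks open the same file collectively.
        Series series("heat_%06T.bp", Access::CREATE, MPI_COMM_WORLD);

        auto iteration = series.writeIterations()[0];
        auto scalar = iteration.meshes["temperature"][MeshRecordComponent::SCALAR];

        // Global dataset; each rank contributes one contiguous slab.
        std::size_t const localN = 100;
        std::size_t const globalN = localN * static_cast<std::size_t>(size);
        scalar.resetDataset(Dataset(Datatype::DOUBLE, {globalN}));

        auto local = std::shared_ptr<double>(new double[localN](),
                                             std::default_delete<double[]>());
        scalar.storeChunk(local, {static_cast<std::uint64_t>(rank) * localN}, {localN});
        iteration.close();
    } // the Series must be closed before MPI_Finalize
    MPI_Finalize();
}
```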