
Commit

Merge branch 'main' into parallel-concatenate
bouweandela authored Jun 10, 2024
2 parents 2540fea + 6adac1b commit 3bfea80
Showing 35 changed files with 924 additions and 440 deletions.
8 changes: 4 additions & 4 deletions .github/workflows/benchmarks_run.yml
@@ -71,9 +71,9 @@ jobs:
with:
fetch-depth: 0

- - name: Install ASV & Nox
+ - name: Install Nox
run: |
- pip install asv nox
+ pip install nox
- name: Cache environment directories
id: cache-env-dir
@@ -112,7 +112,7 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
PR_NUMBER: ${{ github.event.number }}
run: |
- python benchmarks/bm_runner.py branch origin/${{ github.base_ref }}
+ nox -s benchmarks -- branch origin/${{ github.base_ref }}
- name: Run overnight benchmarks
id: overnight
@@ -128,7 +128,7 @@
if [ "$first_commit" != "" ]
then
- python benchmarks/bm_runner.py overnight $first_commit
+ nox -s benchmarks -- overnight $first_commit
fi
- name: Warn of failure
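The change above moves the CI from calling `bm_runner.py` directly to going through a Nox `benchmarks` session. For orientation, a session of that kind could simply forward its positional arguments to the runner, roughly as sketched below; the repository's real `noxfile.py` is not part of this diff, so the session body, including the `session.install` line, is an assumption.

```python
# Hypothetical sketch only: the real noxfile.py is not shown in this commit.
import nox


@nox.session
def benchmarks(session: nox.Session) -> None:
    """Run the benchmark runner, forwarding any arguments given after `--`."""
    session.install("asv", "nox")
    # `nox -s benchmarks -- overnight <sha>` arrives here as
    # session.posargs == ["overnight", "<sha>"].
    session.run("python", "benchmarks/bm_runner.py", *session.posargs)
```

With such a session, `nox -s benchmarks -- overnight <sha>` behaves like the previous direct `python benchmarks/bm_runner.py overnight <sha>` call, but inside a Nox-managed environment.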
2 changes: 1 addition & 1 deletion .github/workflows/ci-manifest.yml
@@ -23,4 +23,4 @@ concurrency:
jobs:
manifest:
name: "check-manifest"
- uses: scitools/workflows/.github/workflows/ci-manifest.yml@2024.05.0
+ uses: scitools/workflows/.github/workflows/ci-manifest.yml@2024.06.0
2 changes: 1 addition & 1 deletion .github/workflows/refresh-lockfiles.yml
@@ -14,5 +14,5 @@ on:

jobs:
refresh_lockfiles:
- uses: scitools/workflows/.github/workflows/refresh-lockfiles.yml@2024.05.0
+ uses: scitools/workflows/.github/workflows/refresh-lockfiles.yml@2024.06.0
secrets: inherit
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
@@ -29,7 +29,7 @@ repos:
- id: no-commit-to-branch

- repo: https://github.com/astral-sh/ruff-pre-commit
rev: "v0.4.4"
rev: "v0.4.7"
hooks:
- id: ruff
types: [file, python]
@@ -38,7 +38,7 @@ repos:
types: [file, python]

- repo: https://github.com/codespell-project/codespell
rev: "v2.2.6"
rev: "v2.3.0"
hooks:
- id: codespell
types_or: [asciidoc, python, markdown, rst]
22 changes: 14 additions & 8 deletions benchmarks/README.md
@@ -20,13 +20,13 @@ the PR's base branch, thus showing performance differences introduced
by the PR. (This run is managed by
[the aforementioned GitHub Action](../.github/workflows/benchmark.yml)).

- `asv ...` commands must be run from this directory. You will need to have ASV
- installed, as well as Nox (see
- [Benchmark environments](#benchmark-environments)).

- The benchmark runner ([bm_runner.py](./bm_runner.py)) provides conveniences for
+ To run locally: the **benchmark runner** provides conveniences for
common benchmark setup and run tasks, including replicating the automated
- overnight run locally. See `python bm_runner.py --help` for detail.
+ overnight run locally. This is accessed via the Nox `benchmarks` session - see
+ `nox -s benchmarks -- --help` for detail (_see also:
+ [bm_runner.py](./bm_runner.py)_). Alternatively you can directly run `asv ...`
+ commands from this directory (you will still need Nox installed - see
+ [Benchmark environments](#benchmark-environments)).

A significant portion of benchmark run time is environment management. Run-time
can be reduced by placing the benchmark environment on the same file system as
@@ -43,11 +43,17 @@ if it is not already. You can achieve this by either:

* `OVERRIDE_TEST_DATA_REPOSITORY` - required - some benchmarks use
`iris-test-data` content, and your local `site.cfg` is not available for
- benchmark scripts.
+ benchmark scripts. The benchmark runner defers to any value already set in
+ the shell, but will otherwise download `iris-test-data` and set the variable
+ accordingly.
* `DATA_GEN_PYTHON` - required - path to a Python executable that can be
used to generate benchmark test objects/files; see
[Data generation](#data-generation). The benchmark runner sets this
- automatically, but will defer to any value already set in the shell.
+ automatically, but will defer to any value already set in the shell. Note that
+ [Mule](https://github.com/metomi/mule) will be automatically installed into
+ this environment, and sometimes
+ [iris-test-data](https://github.com/SciTools/iris-test-data) (see
+ `OVERRIDE_TEST_DATA_REPOSITORY`).
* `BENCHMARK_DATA` - optional - path to a directory for benchmark synthetic
test data, which the benchmark scripts will create if it doesn't already
exist. Defaults to `<root>/benchmarks/.data/` if not set. Note that some of
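As a rough illustration of the contract these environment variables describe, a helper inside the benchmarks package might resolve them as below; the function names and fallback logic are assumptions for illustration, not code from this commit.

```python
# Illustrative only: names and defaults are assumptions based on the README text.
from os import environ
from pathlib import Path


def benchmark_data_dir() -> Path:
    """Resolve the synthetic-data directory (BENCHMARK_DATA), creating it if needed."""
    default = Path(__file__).resolve().parent / ".data"
    data_dir = Path(environ.get("BENCHMARK_DATA", default))
    data_dir.mkdir(parents=True, exist_ok=True)
    return data_dir


def data_gen_python() -> Path:
    """Return the Python executable used to generate benchmark test data."""
    # DATA_GEN_PYTHON is required; the benchmark runner normally sets it,
    # deferring to any value already present in the shell.
    return Path(environ["DATA_GEN_PYTHON"])
```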
54 changes: 45 additions & 9 deletions benchmarks/benchmarks/__init__.py
@@ -5,7 +5,9 @@
"""Common code for benchmarks."""

from os import environ
- import resource
+ import tracemalloc
+
+ import numpy as np


def disable_repeat_between_setup(benchmark_object):
@@ -61,27 +63,34 @@ class TrackAddedMemoryAllocation:
AVD's detection threshold and be treated as 'signal'. Results
smaller than this value will therefore be returned as equal to this
value, ensuring fractionally small noise / no noise at all.
+ Defaults to 1.0
+ RESULT_ROUND_DP : int
+ Number of decimal places of rounding on result values (in Mb).
+ Defaults to 1
"""

- RESULT_MINIMUM_MB = 5.0

- @staticmethod
- def process_resident_memory_mb():
- return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0
+ RESULT_MINIMUM_MB = 0.2
+ RESULT_ROUND_DP = 1 # I.E. to nearest 0.1 Mb

def __enter__(self):
- self.mb_before = self.process_resident_memory_mb()
+ tracemalloc.start()
return self

def __exit__(self, *_):
- self.mb_after = self.process_resident_memory_mb()
+ _, peak_mem_bytes = tracemalloc.get_traced_memory()
+ tracemalloc.stop()
+ # Save peak-memory allocation, scaled from bytes to Mb.
+ self._peak_mb = peak_mem_bytes * (2.0**-20)

def addedmem_mb(self):
"""Return measured memory growth, in Mb."""
- result = self.mb_after - self.mb_before
+ result = self._peak_mb
# Small results are too vulnerable to noise being interpreted as signal.
result = max(self.RESULT_MINIMUM_MB, result)
+ # Rounding makes results easier to read.
+ result = np.round(result, self.RESULT_ROUND_DP)
return result

@staticmethod
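The replacement of `resource.getrusage` with `tracemalloc` changes what is measured: peak Python-level allocation inside the measured block, rather than growth in process-resident memory, with the result clamped to `RESULT_MINIMUM_MB` and rounded to `RESULT_ROUND_DP`. A minimal standalone sketch of that pattern (the workload is invented for illustration):

```python
import tracemalloc

import numpy as np

RESULT_MINIMUM_MB = 0.2
RESULT_ROUND_DP = 1  # i.e. to nearest 0.1 Mb

tracemalloc.start()
data = [bytearray(1024) for _ in range(10_000)]  # code under measurement (~10 Mb)
_, peak_bytes = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Scale bytes to Mb, clamp tiny values so noise is not read as signal,
# then round for readability, mirroring addedmem_mb() above.
peak_mb = peak_bytes * 2.0**-20
print(np.round(max(RESULT_MINIMUM_MB, peak_mb), RESULT_ROUND_DP))
```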
@@ -105,6 +114,33 @@ def _wrapper(*args, **kwargs):
decorated_func.unit = "Mb"
return _wrapper

+ @staticmethod
+ def decorator_repeating(repeats=3):
+ """Benchmark to track growth in resident memory during execution.
+ Tracks memory for repeated calls of decorated function.
+ Intended for use on ASV ``track_`` benchmarks. Applies the
+ :class:`TrackAddedMemoryAllocation` context manager to the benchmark
+ code, sets the benchmark ``unit`` attribute to ``Mb``.
+ """
+
+ def decorator(decorated_func):
+ def _wrapper(*args, **kwargs):
+ assert decorated_func.__name__[:6] == "track_"
+ # Run the decorated benchmark within the added memory context
+ # manager.
+ with TrackAddedMemoryAllocation() as mb:
+ for _ in range(repeats):
+ decorated_func(*args, **kwargs)
+ return mb.addedmem_mb()
+
+ decorated_func.unit = "Mb"
+ return _wrapper
+
+ return decorator


def on_demand_benchmark(benchmark_object):
"""Disable these benchmark(s) unless ON_DEMAND_BENCHARKS env var is set.
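To show how the existing `decorator` and the new `decorator_repeating` are intended to be used, here is a hypothetical ASV-style benchmark class; the class name, workload, and import path are invented for illustration.

```python
# Hypothetical benchmark module; assumes it lives inside the benchmarks package
# so that TrackAddedMemoryAllocation is importable from its __init__.py.
import numpy as np

from . import TrackAddedMemoryAllocation


class AddedMemory:
    def setup(self):
        self.shape = (2048, 2048)

    @TrackAddedMemoryAllocation.decorator
    def track_addedmem_zeros(self):
        # Reported value is the peak allocation (in Mb) while this body runs.
        np.zeros(self.shape).sum()

    @TrackAddedMemoryAllocation.decorator_repeating(repeats=5)
    def track_addedmem_zeros_repeated(self):
        # All repeats run inside one tracemalloc window, which helps lift
        # small per-call allocations above RESULT_MINIMUM_MB.
        np.zeros(self.shape).sum()
```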
(Diffs for the remaining changed files are not shown here.)
