Move contrib metrics files (#3220)
* move contrib

* make changes to ignite/metrics

* add tests and import fixes

* move docs for deprecated contrib.metrics

* move tests for deprecated contrib.metrics

* adjust references

* rename test modules

* fix version of deprecation

* fix doctest

* add deprecation warnings

* adjust precision of comparison in test

This test fails intermittently. The main difference between this branch and master is a chance
difference in the order in which the tests are run, which reliably triggers the failure.
leej3 authored Mar 28, 2024
1 parent 8cf740d commit b8fc451
Showing 82 changed files with 2,504 additions and 1,894 deletions.
19 changes: 11 additions & 8 deletions README.md
@@ -241,7 +241,7 @@ def function_before_backprop(engine):
## Out-of-the-box metrics

- [Metrics](https://pytorch.org/ignite/metrics.html#complete-list-of-metrics) for various tasks:
Precision, Recall, Accuracy, Confusion Matrix, IoU etc, ~20 [regression metrics](https://pytorch.org/ignite/contrib/metrics.html#regression-metrics).
Precision, Recall, Accuracy, Confusion Matrix, IoU etc, ~20 [regression metrics](https://pytorch.org/ignite/metrics.html#complete-list-of-metrics).

- Users can also [compose their metrics](https://pytorch.org/ignite/metrics.html#metric-arithmetics) with ease from
existing ones using arithmetic operations or torch methods.
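The metric arithmetic referred to just above can be illustrated with a minimal sketch (not part of the diff; `evaluator` is assumed to be an existing evaluation Engine):

from ignite.metrics import Precision, Recall

# Compose F1 from Precision and Recall via metric arithmetic (yields a MetricsLambda).
precision = Precision(average=False)
recall = Recall(average=False)
f1 = (precision * recall * 2 / (precision + recall)).mean()
f1.attach(evaluator, "f1")  # `evaluator` is an assumed, pre-built evaluation Engine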
@@ -315,24 +315,27 @@ List of available pre-built images
</summary>

Base
- `pytorchignite/base:latest`

- `pytorchignite/base:latest`
- `pytorchignite/apex:latest`
- `pytorchignite/hvd-base:latest`
- `pytorchignite/hvd-apex:latest`
- `pytorchignite/hvd-apex:latest`
- `pytorchignite/msdp-apex:latest`

Vision:

- `pytorchignite/vision:latest`
- `pytorchignite/hvd-vision:latest`
- `pytorchignite/apex-vision:latest`
- `pytorchignite/hvd-apex-vision:latest`
- `pytorchignite/msdp-apex-vision:latest`

NLP:

- `pytorchignite/nlp:latest`
- `pytorchignite/hvd-nlp:latest`
- `pytorchignite/apex-nlp:latest`
- `pytorchignite/hvd-apex-nlp:latest`
- `pytorchignite/apex-nlp:latest`
- `pytorchignite/hvd-apex-nlp:latest`
- `pytorchignite/msdp-apex-nlp:latest`

</details>
@@ -416,8 +419,8 @@ Features:
## Code-Generator application

The easiest way to create your training scripts with PyTorch-Ignite:
- https://code-generator.pytorch-ignite.ai/

- https://code-generator.pytorch-ignite.ai/

<!-- ############################################################################################################### -->

@@ -502,7 +505,7 @@ Blog articles, tutorials, books
- [The Hero Rises: Build Your Own SSD](https://allegro.ai/blog/the-hero-rises-build-your-own-ssd/)
- [Using Optuna to Optimize PyTorch Ignite Hyperparameters](https://medium.com/pytorch/using-optuna-to-optimize-pytorch-ignite-hyperparameters-626ffe6d4783)
- [PyTorch Ignite - Classifying Tiny ImageNet with EfficientNet](https://towardsdatascience.com/pytorch-ignite-classifying-tiny-imagenet-with-efficientnet-e5b1768e5e8f)

</details>

<details>
@@ -516,7 +519,7 @@ Toolkits
- [Nussl - a flexible, object-oriented Python audio source separation library](https://github.com/nussl/nussl)
- [PyTorch Adapt - A fully featured and modular domain adaptation library](https://github.com/KevinMusgrave/pytorch-adapt)
- [gnina-torch: PyTorch implementation of GNINA scoring function](https://github.com/RMeli/gnina-torch)

</details>

<details>
2 changes: 1 addition & 1 deletion docs/source/contrib/handlers.rst
@@ -28,5 +28,5 @@ Time profilers [deprecated]
Loggers [deprecated]
--------------------

.. deprecated:: 0.4.14
.. deprecated:: 0.5.1
Loggers moved to :ref:`Loggers`.
59 changes: 9 additions & 50 deletions docs/source/contrib/metrics.rst
@@ -1,56 +1,15 @@
ignite.contrib.metrics
======================
=======================

Contrib module metrics
----------------------
Contrib module metrics [deprecated]
-----------------------------------

.. currentmodule:: ignite.contrib.metrics
.. deprecated:: 0.5.1
All metrics moved to :ref:`Complete list of metrics`.

.. autosummary::
:nosignatures:
:toctree: ../generated

AveragePrecision
CohenKappa
GpuInfo
PrecisionRecallCurve
ROC_AUC
RocCurve
Regression metrics [deprecated]
--------------------------------

Regression metrics
------------------

.. currentmodule:: ignite.contrib.metrics.regression

.. automodule:: ignite.contrib.metrics.regression


Module :mod:`ignite.contrib.metrics.regression` provides implementations of
metrics useful for regression tasks. Definitions of metrics are based on `Botchkarev 2018`_, page 30 "Appendix 2. Metrics mathematical definitions".

.. _`Botchkarev 2018`:
https://arxiv.org/abs/1809.03006

Complete list of metrics:

.. currentmodule:: ignite.contrib.metrics.regression

.. autosummary::
:nosignatures:
:toctree: ../generated

CanberraMetric
FractionalAbsoluteError
FractionalBias
GeometricMeanAbsoluteError
GeometricMeanRelativeAbsoluteError
ManhattanDistance
MaximumAbsoluteError
MeanAbsoluteRelativeError
MeanError
MeanNormalizedBias
MedianAbsoluteError
MedianAbsolutePercentageError
MedianRelativeAbsoluteError
R2Score
WaveHedgesDistance
.. deprecated:: 0.5.1
All metrics moved to :ref:`Complete list of metrics`.
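In practice this deprecation is only a change of import path; a minimal sketch of the new locations (metric names taken from the lists above; the old ignite.contrib.metrics paths keep importing for now but are slated for removal per the shims later in this commit):

from ignite.metrics import ROC_AUC, CohenKappa   # previously ignite.contrib.metrics
from ignite.metrics.regression import R2Score    # previously ignite.contrib.metrics.regression

r2 = R2Score()       # regression metrics need no extra dependencies
roc_auc = ROC_AUC()  # ROC_AUC still requires scikit-learn at construction time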
5 changes: 2 additions & 3 deletions docs/source/defaults.rst
@@ -12,9 +12,8 @@
from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.metrics.regression import *
from ignite.utils import *
from ignite.contrib.metrics.regression import *
from ignite.contrib.metrics import *

# create default evaluator for doctests

@@ -46,4 +45,4 @@
('fc', nn.Linear(2, 1))
]))

manual_seed(666)
manual_seed(666)
32 changes: 32 additions & 0 deletions docs/source/metrics.rst
@@ -352,6 +352,35 @@ Complete list of metrics
FID
CosineSimilarity
Entropy
AveragePrecision
CohenKappa
GpuInfo
PrecisionRecallCurve
RocCurve
ROC_AUC
regression.CanberraMetric
regression.FractionalAbsoluteError
regression.FractionalBias
regression.GeometricMeanAbsoluteError
regression.GeometricMeanRelativeAbsoluteError
regression.ManhattanDistance
regression.MaximumAbsoluteError
regression.MeanAbsoluteRelativeError
regression.MeanError
regression.MeanNormalizedBias
regression.MedianAbsoluteError
regression.MedianAbsolutePercentageError
regression.MedianRelativeAbsoluteError
regression.R2Score
regression.WaveHedgesDistance


.. note::

Module ignite.metrics.regression provides implementations of metrics useful
for regression tasks. Definitions of metrics are based on
`Botchkarev 2018`_, page 30 "Appendix 2. Metrics mathematical definitions".


Helpers for customizing metrics
-------------------------------
@@ -393,3 +422,6 @@ reinit__is_reduced
sync_all_reduce
~~~~~~~~~~~~~~~
.. autofunction:: sync_all_reduce

.. _`Botchkarev 2018`:
https://arxiv.org/abs/1809.03006
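The relocated regression metrics attach to an evaluator like any other ignite metric; a minimal sketch with made-up values (not from the source):

import torch
from ignite.engine import Engine
from ignite.metrics.regression import MedianAbsoluteError

# A trivial evaluation step that just forwards (y_pred, y) pairs.
def eval_step(engine, batch):
    y_pred, y = batch
    return y_pred, y

evaluator = Engine(eval_step)
MedianAbsoluteError().attach(evaluator, "median_abs_err")

y_pred = torch.tensor([2.0, 3.0, 4.0])
y_true = torch.tensor([2.5, 3.0, 5.0])
state = evaluator.run([[y_pred, y_true]])
print(state.metrics["median_abs_err"])  # absolute errors are [0.5, 0.0, 1.0], so the median is 0.5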
2 changes: 1 addition & 1 deletion examples/mnist/mnist_with_tensorboard_logger.py
@@ -91,7 +91,7 @@ def run(train_batch_size, val_batch_size, epochs, lr, momentum, log_dir):
trainer.logger = setup_logger("Trainer")

if sys.version_info > (3,):
from ignite.contrib.metrics.gpu_info import GpuInfo
from ignite.metrics.gpu_info import GpuInfo

try:
GpuInfo().attach(trainer)
2 changes: 1 addition & 1 deletion examples/mnist/mnist_with_tqdm_logger.py
@@ -64,7 +64,7 @@ def run(train_batch_size, val_batch_size, epochs, lr, momentum, display_gpu_info
RunningAverage(output_transform=lambda x: x).attach(trainer, "loss")

if display_gpu_info:
from ignite.contrib.metrics import GpuInfo
from ignite.metrics import GpuInfo

GpuInfo().attach(trainer, name="gpu")

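Both MNIST examples now pull GpuInfo from its new home; a minimal sketch of how the attached values can be surfaced (assumes an existing `trainer` Engine, the pynvml package, a visible NVIDIA GPU, and the post-move ProgressBar import path):

from ignite.handlers import ProgressBar
from ignite.metrics import GpuInfo

GpuInfo().attach(trainer, name="gpu")                          # per-iteration GPU memory/utilization metrics
ProgressBar(persist=True).attach(trainer, metric_names="all")  # display them alongside the other metrics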
7 changes: 3 additions & 4 deletions ignite/contrib/engines/common.py
@@ -15,7 +15,6 @@
from torch.optim.lr_scheduler import _LRScheduler as PyTorchLRScheduler

import ignite.distributed as idist
from ignite.contrib.metrics import GpuInfo
from ignite.engine import Engine, Events
from ignite.handlers import (
Checkpoint,
@@ -35,7 +34,7 @@
from ignite.handlers.base_logger import BaseLogger
from ignite.handlers.checkpoint import BaseSaveHandler
from ignite.handlers.param_scheduler import ParamScheduler
from ignite.metrics import RunningAverage
from ignite.metrics import GpuInfo, RunningAverage
from ignite.metrics.metric import RunningBatchWise
from ignite.utils import deprecated

@@ -78,14 +77,14 @@ def setup_common_training_handlers(
exclusive with ``save_handler``.
lr_scheduler: learning rate scheduler
as native torch LRScheduler or ignite's parameter scheduler.
with_gpu_stats: if True, :class:`~ignite.contrib.metrics.GpuInfo` is attached to the
with_gpu_stats: if True, :class:`~ignite.metrics.GpuInfo` is attached to the
trainer. This requires `pynvml` package to be installed.
output_names: list of names associated with `update_function` output dictionary.
with_pbars: if True, two progress bars on epochs and optionally on iterations are attached.
Default, True.
with_pbar_on_iters: if True, a progress bar on iterations is attached to the trainer.
Default, True.
log_every_iters: logging interval for :class:`~ignite.contrib.metrics.GpuInfo` and for
log_every_iters: logging interval for :class:`~ignite.metrics.GpuInfo` and for
epoch-wise progress bar. Default, 100.
stop_on_nan: if True, :class:`~ignite.handlers.terminate_on_nan.TerminateOnNan` handler is added to the trainer.
Default, True.
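A minimal sketch of the keyword arguments documented in the docstring above (assumes an existing `trainer` Engine whose update function returns a loss value under the name given in output_names):

from ignite.contrib.engines.common import setup_common_training_handlers

setup_common_training_handlers(
    trainer,
    with_gpu_stats=True,    # attaches ignite.metrics.GpuInfo (requires pynvml)
    output_names=["loss"],  # names associated with the update function's output
    with_pbars=True,
    log_every_iters=100,    # logging interval for GpuInfo and the epoch-wise progress bar
    stop_on_nan=True,       # adds the TerminateOnNan handler
)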
13 changes: 7 additions & 6 deletions ignite/contrib/metrics/__init__.py
@@ -1,6 +1,7 @@
import ignite.contrib.metrics.regression
from ignite.contrib.metrics.average_precision import AveragePrecision
from ignite.contrib.metrics.cohen_kappa import CohenKappa
from ignite.contrib.metrics.gpu_info import GpuInfo
from ignite.contrib.metrics.precision_recall_curve import PrecisionRecallCurve
from ignite.contrib.metrics.roc_auc import ROC_AUC, RocCurve
import ignite.metrics.regression
from ignite.metrics import average_precision, cohen_kappa, gpu_info, precision_recall_curve, roc_auc
from ignite.metrics.average_precision import AveragePrecision
from ignite.metrics.cohen_kappa import CohenKappa
from ignite.metrics.gpu_info import GpuInfo
from ignite.metrics.precision_recall_curve import PrecisionRecallCurve
from ignite.metrics.roc_auc import ROC_AUC, RocCurve
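Since the rewritten __init__ above simply re-exports the relocated classes, the old and new names resolve to the same objects; a quick sketch:

from ignite.contrib.metrics import AveragePrecision as OldAP  # deprecated path, still importable for now
from ignite.metrics import AveragePrecision as NewAP          # new canonical path

assert OldAP is NewAP  # the contrib package just re-exports the relocated class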
103 changes: 22 additions & 81 deletions ignite/contrib/metrics/average_precision.py
@@ -1,81 +1,22 @@
from typing import Callable, Union

import torch

from ignite.metrics import EpochMetric


def average_precision_compute_fn(y_preds: torch.Tensor, y_targets: torch.Tensor) -> float:
from sklearn.metrics import average_precision_score

y_true = y_targets.cpu().numpy()
y_pred = y_preds.cpu().numpy()
return average_precision_score(y_true, y_pred)


class AveragePrecision(EpochMetric):
"""Computes Average Precision accumulating predictions and the ground-truth during an epoch
and applying `sklearn.metrics.average_precision_score <https://scikit-learn.org/stable/modules/generated/
sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score>`_ .
Args:
output_transform: a callable that is used to transform the
:class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the
form expected by the metric. This can be useful if, for example, you have a multi-output model and
you want to compute the metric with respect to one of the outputs.
check_compute_fn: Default False. If True, `average_precision_score
<https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html
#sklearn.metrics.average_precision_score>`_ is run on the first batch of data to ensure there are
no issues. User will be warned in case there are any issues computing the function.
device: optional device specification for internal storage.
Note:
AveragePrecision expects y to be comprised of 0's and 1's. y_pred must either be probability estimates or
confidence values. To apply an activation to y_pred, use output_transform as shown below:
.. code-block:: python
def activated_output_transform(output):
y_pred, y = output
y_pred = torch.softmax(y_pred, dim=1)
return y_pred, y
avg_precision = AveragePrecision(activated_output_transform)
Examples:
.. include:: defaults.rst
:start-after: :orphan:
.. testcode::
y_pred = torch.tensor([[0.79, 0.21], [0.30, 0.70], [0.46, 0.54], [0.16, 0.84]])
y_true = torch.tensor([[1, 1], [1, 1], [0, 1], [0, 1]])
avg_precision = AveragePrecision()
avg_precision.attach(default_evaluator, 'average_precision')
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics['average_precision'])
.. testoutput::
0.9166...
"""

def __init__(
self,
output_transform: Callable = lambda x: x,
check_compute_fn: bool = False,
device: Union[str, torch.device] = torch.device("cpu"),
):
try:
from sklearn.metrics import average_precision_score # noqa: F401
except ImportError:
raise ModuleNotFoundError("This contrib module requires scikit-learn to be installed.")

super(AveragePrecision, self).__init__(
average_precision_compute_fn,
output_transform=output_transform,
check_compute_fn=check_compute_fn,
device=device,
)
""" ``ignite.contrib.metrics.average_precision`` was moved to ``ignite.metrics.average_precision``.
Note:
``ignite.contrib.metrics.average_precision`` was moved to ``ignite.metrics.average_precision``.
Please refer to :mod:`~ignite.metrics.average_precision`.
"""

import warnings

removed_in = "0.6.0"
deprecation_warning = (
f"{__file__} has been moved to /ignite/metrics/average_precision.py"
+ (f" and will be removed in version {removed_in}" if removed_in else "")
+ ".\n Please refer to the documentation for more details."
)
warnings.warn(deprecation_warning, DeprecationWarning, stacklevel=2)
from ignite.metrics.average_precision import AveragePrecision

__all__ = [
"AveragePrecision",
]

AveragePrecision = AveragePrecision
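Usage of AveragePrecision itself is unchanged, only the import moved; a sketch mirroring the doctest removed above (`default_evaluator` is the doctest fixture defined in docs/source/defaults.rst, and scikit-learn is required at construction time):

import torch
from ignite.metrics import AveragePrecision

y_pred = torch.tensor([[0.79, 0.21], [0.30, 0.70], [0.46, 0.54], [0.16, 0.84]])
y_true = torch.tensor([[1, 1], [1, 1], [0, 1], [0, 1]])

avg_precision = AveragePrecision()
avg_precision.attach(default_evaluator, "average_precision")
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics["average_precision"])  # 0.9166... per the original doctest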
