(chore): adopt scientific python deprecation schedule #1768

Merged
merged 44 commits into main from ig/scientific_python_deprecation_schedule on Feb 18, 2025
Changes from 17 commits
Commits
f2af154
(chore): migrate to only checking `cs{r,c}_matrix` instead of `spmatrix`
ilan-gold Nov 14, 2024
4f97787
(chore): alter tests as well
ilan-gold Nov 14, 2024
8ea1c4e
(chore): release note
ilan-gold Nov 14, 2024
0253072
(chore): update for scientific python deprecation schedule
ilan-gold Nov 15, 2024
b547a2c
(chore): change name in azure pipeline
ilan-gold Nov 15, 2024
fb686af
(fix): use `https`
ilan-gold Nov 15, 2024
b891b7c
(fix): remove `git@`
ilan-gold Nov 15, 2024
7ab8eee
(chore): remove py3.11 checks
ilan-gold Nov 15, 2024
94c1cdf
(fix): remove scanpy for now
ilan-gold Nov 15, 2024
5d45a79
Merge branch 'main' into ig/scientific_python_deprecation_schedule
ilan-gold Jan 29, 2025
2e53c19
Merge branch 'main' into ig/scientific_python_deprecation_schedule
ilan-gold Jan 29, 2025
51c26a4
(fix): correct h5py version
ilan-gold Jan 29, 2025
7286691
(fix): h5py 3.8
ilan-gold Jan 29, 2025
b77beaa
(fix): respect `pyproject.toml` max versions
ilan-gold Jan 29, 2025
3f18c10
(fix): no `asv` for python 3.13
ilan-gold Jan 29, 2025
c8fedea
(fix): install command
ilan-gold Jan 29, 2025
2e705f8
(fix): path re-route
ilan-gold Jan 29, 2025
23ca307
(fix): remove `add_note`
ilan-gold Jan 29, 2025
f2200ba
(fix): gpu ci
ilan-gold Jan 29, 2025
00e972f
(fix): no `eye_array` in 1.11
ilan-gold Jan 31, 2025
3cd8330
(fix): filter h5py-numpy deprecation
ilan-gold Jan 31, 2025
5768b12
(fix): concatenation bugs
ilan-gold Feb 2, 2025
ee7faa9
(fix): still an issue with matrices!
ilan-gold Feb 2, 2025
60e1149
(fix): minimum version needs to be higher of dask
ilan-gold Feb 3, 2025
06d11e3
(fix): try different constraint
ilan-gold Feb 3, 2025
94d08eb
(chore): remove `CAN_USE_SPARSE_ARRAY`
ilan-gold Feb 3, 2025
296b6c9
(chore): remove duplicated lines
ilan-gold Feb 3, 2025
2740341
(chore): release note
ilan-gold Feb 3, 2025
3f0afae
(fix): bring back skip for array allocation
ilan-gold Feb 3, 2025
3351c43
(fix): PR for relnote
ilan-gold Feb 3, 2025
81f893f
(chore): remove `pandas` 1.X comment
ilan-gold Feb 17, 2025
73f1252
(refactor): `Sp{Matrix,Array}` -> `CS{Matrix,Array}`
ilan-gold Feb 17, 2025
e5d2394
(fix): include `CSArray` in mutable mapping subclasses
ilan-gold Feb 17, 2025
95d18f5
(fix): add back in scalar type for `__setitem__`
ilan-gold Feb 17, 2025
0e914b6
(chore): remove `stdlib` header
ilan-gold Feb 17, 2025
71ca506
(fix): remove `allow-direct-references`
ilan-gold Feb 17, 2025
0250831
(fix): pass through reqs unchanged in `min-deps`
ilan-gold Feb 17, 2025
5b50436
(fix): remove exceptiongroup
ilan-gold Feb 17, 2025
1211e44
Merge branch 'main' into ig/scientific_python_deprecation_schedule
ilan-gold Feb 17, 2025
22ed422
(fix): dask sparse array bound
ilan-gold Feb 17, 2025
29838ed
Merge branch 'ig/scientific_python_deprecation_schedule' of github.co…
ilan-gold Feb 17, 2025
7a42a1f
(chore): add comment about dask bound
ilan-gold Feb 17, 2025
ba154db
(fix): correct minimum dask version
ilan-gold Feb 17, 2025
8b5048e
Merge branch 'main' into ig/scientific_python_deprecation_schedule
ilan-gold Feb 18, 2025
18 changes: 9 additions & 9 deletions .azure-pipelines.yml
@@ -14,18 +14,18 @@ jobs:
vmImage: "ubuntu-22.04"
strategy:
matrix:
Python3.12:
python.version: "3.12"
Python3.13:
python.version: "3.13"
RUN_COVERAGE: yes
TEST_TYPE: "coverage"
Python3.10:
python.version: "3.10"
Python3.11:
python.version: "3.11"
PreRelease:
python.version: "3.12"
python.version: "3.13"
DEPENDENCIES_VERSION: "pre-release"
TEST_TYPE: "strict-warning"
minimum_versions:
python.version: "3.10"
python.version: "3.11"
DEPENDENCIES_VERSION: "minimum"
TEST_TYPE: "coverage"
steps:
@@ -57,7 +57,7 @@ jobs:
set -e
uv pip install --system --compile tomli packaging
deps=`python3 ci/scripts/min-deps.py pyproject.toml --extra dev test`
uv pip install --system --compile $deps pytest-cov "anndata @ ."
uv pip install --system --compile $deps pytest-cov "anndata[test,dev] @ ."
displayName: "Install minimum dependencies"
condition: eq(variables['DEPENDENCIES_VERSION'], 'minimum')

@@ -104,8 +104,8 @@ jobs:
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: "3.12"
displayName: "Use Python 3.12"
versionSpec: "3.13"
displayName: "Use Python 3.13"

- script: |
set -e
2 changes: 1 addition & 1 deletion .readthedocs.yml
@@ -2,7 +2,7 @@ version: 2
build:
os: ubuntu-20.04
tools:
python: "3.12"
python: "3.13"
jobs:
post_checkout:
# unshallow so version can be derived from tag
6 changes: 1 addition & 5 deletions ci/scripts/min-deps.py
@@ -10,17 +10,13 @@

import argparse
import sys
import tomllib
from collections import deque
from contextlib import ExitStack
from functools import cached_property
from pathlib import Path
from typing import TYPE_CHECKING

if sys.version_info >= (3, 11):
import tomllib
else:
import tomli as tomllib

from packaging.requirements import Requirement
from packaging.version import Version

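With `requires-python >= 3.11`, `tomllib` is always available in the standard library, so the script no longer needs the `tomli` backport. A minimal sketch of the resulting unconditional import (paths and keys are illustrative, not the script's exact logic):

```python
# Sketch only: assumes Python >= 3.11, where tomllib ships in the standard library.
import tomllib
from pathlib import Path

# tomllib requires the file to be opened in binary mode.
with Path("pyproject.toml").open("rb") as f:
    pyproject = tomllib.load(f)

# e.g. inspect the interpreter floor that makes the unconditional import safe
print(pyproject["project"]["requires-python"])
```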
1 change: 1 addition & 0 deletions docs/conf.py
@@ -137,6 +137,7 @@ def setup(app: Sphinx):
"pandas.DataFrame.loc": ("py:attr", "pandas.DataFrame.loc"),
# should be fixed soon: https://github.com/tox-dev/sphinx-autodoc-typehints/pull/516
"types.EllipsisType": ("py:data", "types.EllipsisType"),
"pathlib._local.Path": "pathlib.Path",
}
autodoc_type_aliases = dict(
NDArray=":data:`~numpy.typing.NDArray`",
1 change: 1 addition & 0 deletions docs/release-notes/1767.breaking.md
@@ -0,0 +1 @@
Tighten usage of {class}`scipy.sparse.spmatrix` for describing sparse matrices in types and instance checks to only {class}`scipy.sparse.csr_matrix` and {class}`scipy.sparse.csc_matrix` {user}`ilan-gold`
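Roughly, the narrowed check behaves like the sketch below; `CSMatrix` here is a local alias for illustration, not necessarily the name anndata exposes:

```python
# Illustrative only: shows what narrowing spmatrix checks to CSR/CSC means in practice.
import numpy as np
from scipy import sparse

CSMatrix = sparse.csr_matrix | sparse.csc_matrix  # runtime union, Python >= 3.10

coo = sparse.coo_matrix(np.eye(3))

assert isinstance(coo, sparse.spmatrix)   # old, permissive check: any sparse format passes
assert not isinstance(coo, CSMatrix)      # new, strict check: COO/LIL/DIA etc. are rejected
assert isinstance(coo.tocsr(), CSMatrix)  # callers convert to a compressed format first
```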
4 changes: 2 additions & 2 deletions hatch.toml
@@ -26,8 +26,8 @@ overrides.matrix.deps.pre-install-commands = [
{ if = ["min"], value = "uv run ci/scripts/min-deps.py pyproject.toml --all-extras -o ci/min-deps.txt" },
]
overrides.matrix.deps.python = [
{ if = ["min"], value = "3.10" },
{ if = ["stable", "pre"], value = "3.12" },
{ if = ["min"], value = "3.11" },
{ if = ["stable", "pre"], value = "3.13" },
]

[[envs.hatch-test.matrix]]
14 changes: 8 additions & 6 deletions pyproject.toml
@@ -5,7 +5,7 @@ requires = ["hatchling", "hatch-vcs"]
[project]
name = "anndata"
description = "Annotated data."
requires-python = ">=3.10"
requires-python = ">=3.11"
license = "BSD-3-Clause"
authors = [
{ name = "Philipp Angerer" },
@@ -29,20 +29,20 @@ classifiers = [
"Operating System :: Microsoft :: Windows",
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Scientific/Engineering :: Bio-Informatics",
"Topic :: Scientific/Engineering :: Visualization",
]
dependencies = [
# pandas <1.4 has pandas/issues/35446
# pandas 2.1.0rc0 has pandas/issues/54622
"pandas >=1.4, !=2.1.0rc0, !=2.1.2",
"numpy>=1.23",
"pandas >=2.0.0, !=2.1.0rc0, !=2.1.2",
"numpy>=1.25",
# https://github.com/scverse/anndata/issues/1434
"scipy >1.8",
"h5py>=3.7",
"scipy >1.11",
"h5py>=3.8",
"exceptiongroup; python_version<'3.11'",
"natsort",
"packaging>=24.2",
@@ -115,6 +115,8 @@ source = "vcs"
raw-options.version_scheme = "release-branch-semver"
[tool.hatch.build.targets.wheel]
packages = ["src/anndata", "src/testing"]
[tool.hatch.metadata]
allow-direct-references = true

[tool.coverage.run]
data_file = "test-data/coverage"
5 changes: 2 additions & 3 deletions src/anndata/_core/aligned_mapping.py
@@ -9,10 +9,9 @@

import numpy as np
import pandas as pd
from scipy.sparse import spmatrix

from .._warnings import ExperimentalFeatureWarning, ImplicitModificationWarning
from ..compat import AwkArray
from ..compat import AwkArray, SpMatrix
from ..utils import (
axis_len,
convert_to_dict,
@@ -36,7 +35,7 @@
OneDIdx = Sequence[int] | Sequence[bool] | slice
TwoDIdx = tuple[OneDIdx, OneDIdx]
# TODO: pd.DataFrame only allowed in AxisArrays?
Value = pd.DataFrame | spmatrix | np.ndarray
Value = pd.DataFrame | SpMatrix | np.ndarray

P = TypeVar("P", bound="AlignedMappingBase")
"""Parent mapping an AlignedView is based on."""
8 changes: 4 additions & 4 deletions src/anndata/_core/anndata.py
@@ -231,14 +231,14 @@ class AnnData(metaclass=utils.DeprecationMixinMeta):
)
def __init__(
self,
X: np.ndarray | sparse.spmatrix | pd.DataFrame | None = None,
X: ArrayDataStructureType | pd.DataFrame | None = None,
obs: pd.DataFrame | Mapping[str, Iterable[Any]] | None = None,
var: pd.DataFrame | Mapping[str, Iterable[Any]] | None = None,
uns: Mapping[str, Any] | None = None,
*,
obsm: np.ndarray | Mapping[str, Sequence[Any]] | None = None,
varm: np.ndarray | Mapping[str, Sequence[Any]] | None = None,
layers: Mapping[str, np.ndarray | sparse.spmatrix] | None = None,
layers: Mapping[str, ArrayDataStructureType] | None = None,
raw: Mapping[str, Any] | None = None,
dtype: np.dtype | type | str | None = None,
shape: tuple[int, int] | None = None,
@@ -592,7 +592,7 @@ def X(self) -> ArrayDataStructureType | None:
# return X

@X.setter
def X(self, value: np.ndarray | sparse.spmatrix | SpArray | None):
def X(self, value: ArrayDataStructureType | None):
if value is None:
if self.isbacked:
msg = "Cannot currently remove data matrix from backed object."
@@ -1189,7 +1189,7 @@ def _inplace_subset_obs(self, index: Index1D):
self._init_as_actual(adata_subset)

# TODO: Update, possibly remove
def __setitem__(self, index: Index, val: float | np.ndarray | sparse.spmatrix):
def __setitem__(self, index: Index, val: ArrayDataStructureType):
if self.is_view:
msg = "Object is view and cannot be accessed with `[]`."
raise ValueError(msg)
12 changes: 6 additions & 6 deletions src/anndata/_core/index.py
@@ -8,9 +8,9 @@
import h5py
import numpy as np
import pandas as pd
from scipy.sparse import issparse, spmatrix
from scipy.sparse import issparse

from ..compat import AwkArray, DaskArray, SpArray
from ..compat import AwkArray, DaskArray, SpArray, SpMatrix

if TYPE_CHECKING:
from ..compat import Index, Index1D
@@ -69,13 +69,13 @@ def name_idx(i):
elif isinstance(indexer, str):
return index.get_loc(indexer) # int
elif isinstance(
indexer, Sequence | np.ndarray | pd.Index | spmatrix | np.matrix | SpArray
indexer, Sequence | np.ndarray | pd.Index | SpMatrix | np.matrix | SpArray
):
if hasattr(indexer, "shape") and (
(indexer.shape == (index.shape[0], 1))
or (indexer.shape == (1, index.shape[0]))
):
if isinstance(indexer, spmatrix | SpArray):
if isinstance(indexer, SpMatrix | SpArray):
indexer = indexer.toarray()
indexer = np.ravel(indexer)
if not isinstance(indexer, np.ndarray | pd.Index):
@@ -180,9 +180,9 @@ def _subset_dask(a: DaskArray, subset_idx: Index):
return a[subset_idx]


@_subset.register(spmatrix)
@_subset.register(SpMatrix)
@_subset.register(SpArray)
def _subset_sparse(a: spmatrix | SpArray, subset_idx: Index):
def _subset_sparse(a: SpMatrix | SpArray, subset_idx: Index):
# Correcting for indexing behaviour of sparse.spmatrix
if len(subset_idx) > 1 and all(isinstance(x, Iterable) for x in subset_idx):
first_idx = subset_idx[0]
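The `_subset` registrations above now target the `SpMatrix` compat alias instead of `scipy.sparse.spmatrix`. Assuming that alias is a plain `csr_matrix | csc_matrix` union, registering it with `functools.singledispatch` relies on union support added in Python 3.11, which lines up with the new interpreter floor. A hedged sketch of the pattern:

```python
# Sketch under the assumption that the compat alias is a csr_matrix | csc_matrix union;
# singledispatch accepts union types in register() only on Python >= 3.11.
from functools import singledispatch

import numpy as np
from scipy import sparse

SpMatrix = sparse.csr_matrix | sparse.csc_matrix  # assumed definition, for illustration


@singledispatch
def nnz_per_row(x):
    raise NotImplementedError(f"unsupported type: {type(x)}")


@nnz_per_row.register(SpMatrix)
def _(x):
    # getnnz(axis=1) works for both CSR and CSC inputs
    return x.getnnz(axis=1)


print(nnz_per_row(sparse.csr_matrix(np.eye(3))))  # [1 1 1]
```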
27 changes: 12 additions & 15 deletions src/anndata/_core/merge.py
@@ -17,7 +17,6 @@
import pandas as pd
from natsort import natsorted
from scipy import sparse
from scipy.sparse import spmatrix

from anndata._warnings import ExperimentalFeatureWarning

@@ -29,6 +28,7 @@
CupySparseMatrix,
DaskArray,
SpArray,
SpMatrix,
_map_cat_to_str,
)
from ..utils import asarray, axis_len, warn_once
@@ -135,7 +135,7 @@ def equal_dask_array(a, b) -> bool:
if isinstance(b, DaskArray):
if tokenize(a) == tokenize(b):
return True
if isinstance(a._meta, spmatrix):
if isinstance(a._meta, SpMatrix):
# TODO: Maybe also do this in the other case?
return da.map_blocks(equal, a, b, drop_axis=(0, 1)).all()
else:
Expand Down Expand Up @@ -165,7 +165,7 @@ def equal_series(a, b) -> bool:
return a.equals(b)


@equal.register(sparse.spmatrix)
@equal.register(SpMatrix)
@equal.register(SpArray)
@equal.register(CupySparseMatrix)
def equal_sparse(a, b) -> bool:
@@ -174,7 +174,7 @@ def equal_sparse(a, b) -> bool:

xp = array_api_compat.array_namespace(a.data)

if isinstance(b, CupySparseMatrix | sparse.spmatrix | SpArray):
if isinstance(b, CupySparseMatrix | SpMatrix | SpArray):
if isinstance(a, CupySparseMatrix):
# Comparison broken for CSC matrices
# https://github.com/cupy/cupy/issues/7757
Expand Down Expand Up @@ -205,8 +205,8 @@ def equal_awkward(a, b) -> bool:
return ak.almost_equal(a, b)


def as_sparse(x, *, use_sparse_array: bool = False):
if not isinstance(x, sparse.spmatrix | SpArray):
def as_sparse(x, *, use_sparse_array=False):
if not isinstance(x, SpMatrix | SpArray):
if CAN_USE_SPARSE_ARRAY and use_sparse_array:
return sparse.csr_array(x)
return sparse.csr_matrix(x)
@@ -537,7 +537,7 @@ def apply(self, el, *, axis, fill_value=None):
return el
if isinstance(el, pd.DataFrame):
return self._apply_to_df(el, axis=axis, fill_value=fill_value)
elif isinstance(el, sparse.spmatrix | SpArray | CupySparseMatrix):
elif isinstance(el, SpMatrix | SpArray | CupySparseMatrix):
return self._apply_to_sparse(el, axis=axis, fill_value=fill_value)
elif isinstance(el, AwkArray):
return self._apply_to_awkward(el, axis=axis, fill_value=fill_value)
@@ -615,8 +615,8 @@ def _apply_to_array(self, el, *, axis, fill_value=None):
)

def _apply_to_sparse(
self, el: sparse.spmatrix | SpArray, *, axis, fill_value=None
) -> spmatrix:
self, el: SpMatrix | SpArray, *, axis, fill_value=None
) -> SpMatrix:
if isinstance(el, CupySparseMatrix):
from cupyx.scipy import sparse
else:
@@ -726,11 +726,8 @@ def default_fill_value(els):
This is largely due to backwards compat, and might not be the ideal solution.
"""
if any(
isinstance(el, sparse.spmatrix | SpArray)
or (
isinstance(el, DaskArray)
and isinstance(el._meta, sparse.spmatrix | SpArray)
)
isinstance(el, SpMatrix | SpArray)
or (isinstance(el, DaskArray) and isinstance(el._meta, SpMatrix | SpArray))
for el in els
):
return 0
@@ -826,7 +823,7 @@ def concat_arrays(arrays, reindexers, axis=0, index=None, fill_value=None):
],
axis=axis,
)
elif any(isinstance(a, sparse.spmatrix | SpArray) for a in arrays):
elif any(isinstance(a, SpMatrix | SpArray) for a in arrays):
sparse_stack = (sparse.vstack, sparse.hstack)[axis]
use_sparse_array = any(issubclass(type(a), SpArray) for a in arrays)
return sparse_stack(
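The concatenation helpers above keep dispatching on the compat aliases; `as_sparse` in particular coerces anything that is not already compressed-sparse to CSR, returning a sparse array or matrix depending on the flag. A rough, hedged sketch of that coercion (names are illustrative, not anndata's API):

```python
# Illustrative coercion pattern, not anndata's exact implementation.
import numpy as np
from scipy import sparse

CSMatrix = sparse.csr_matrix | sparse.csc_matrix
CSArray = sparse.csr_array | sparse.csc_array


def to_compressed(x, *, use_sparse_array: bool = False):
    """Return x unchanged if already CSR/CSC, otherwise convert to CSR."""
    if isinstance(x, CSMatrix | CSArray):
        return x
    return sparse.csr_array(x) if use_sparse_array else sparse.csr_matrix(x)


dense = np.arange(6).reshape(2, 3)
print(type(to_compressed(dense)).__name__)                          # csr_matrix
print(type(to_compressed(dense, use_sparse_array=True)).__name__)   # csr_array
```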
7 changes: 3 additions & 4 deletions src/anndata/_core/raw.py
@@ -17,8 +17,7 @@
from collections.abc import Mapping, Sequence
from typing import ClassVar

from scipy import sparse

from ..compat import SpMatrix
from .aligned_mapping import AxisArraysView
from .anndata import AnnData
from .sparse_dataset import BaseCompressedSparseDataset
@@ -31,7 +30,7 @@ class Raw:
def __init__(
self,
adata: AnnData,
X: np.ndarray | sparse.spmatrix | None = None,
X: np.ndarray | SpMatrix | None = None,
var: pd.DataFrame | Mapping[str, Sequence] | None = None,
varm: AxisArrays | Mapping[str, np.ndarray] | None = None,
):
@@ -67,7 +66,7 @@ def _get_X(self, layer=None):
return self.X

@property
def X(self) -> BaseCompressedSparseDataset | np.ndarray | sparse.spmatrix:
def X(self) -> BaseCompressedSparseDataset | np.ndarray | SpMatrix:
# TODO: Handle unsorted array of integer indices for h5py.Datasets
if not self._adata.isbacked:
return self._X
6 changes: 3 additions & 3 deletions src/anndata/_core/sparse_dataset.py
@@ -30,7 +30,7 @@

from .. import abc
from .._settings import settings
from ..compat import H5Group, SpArray, ZarrArray, ZarrGroup, _read_attr
from ..compat import H5Group, SpArray, SpMatrix, ZarrArray, ZarrGroup, _read_attr
from .index import _fix_slice_bounds, _subset, unpack_index

if TYPE_CHECKING:
@@ -329,7 +329,7 @@ def get_memory_class(
if format == fmt:
if use_sparray_in_io and issubclass(memory_class, SpArray):
return memory_class
elif not use_sparray_in_io and issubclass(memory_class, ss.spmatrix):
elif not use_sparray_in_io and issubclass(memory_class, SpMatrix):
return memory_class
msg = f"Format string {format} is not supported."
raise ValueError(msg)
Expand All @@ -342,7 +342,7 @@ def get_backed_class(
if format == fmt:
if use_sparray_in_io and issubclass(backed_class, SpArray):
return backed_class
elif not use_sparray_in_io and issubclass(backed_class, ss.spmatrix):
elif not use_sparray_in_io and issubclass(backed_class, SpMatrix):
return backed_class
msg = f"Format string {format} is not supported."
raise ValueError(msg)
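`get_memory_class` and `get_backed_class` now test candidate classes against the `SpMatrix` alias when deciding what to return. A hedged sketch of the idea, using a plain lookup table rather than the module's actual registries:

```python
# Sketch only: a simplified stand-in for the format-to-class selection shown above.
from scipy import sparse

_CLASSES = {
    "csr": (sparse.csr_matrix, sparse.csr_array),
    "csc": (sparse.csc_matrix, sparse.csc_array),
}


def get_memory_class(fmt: str, *, use_sparray_in_io: bool = False) -> type:
    try:
        matrix_cls, array_cls = _CLASSES[fmt]
    except KeyError:
        msg = f"Format string {fmt} is not supported."
        raise ValueError(msg) from None
    return array_cls if use_sparray_in_io else matrix_cls


print(get_memory_class("csr").__name__)                           # csr_matrix
print(get_memory_class("csc", use_sparray_in_io=True).__name__)   # csc_array
```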