
jax.numpy reductions: avoid upcast of f16 when dtype is specified by user #26403

Open · wants to merge 1 commit into main
Conversation

@jakevdp (Collaborator) commented Feb 7, 2025

Fixes #26365
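A sketch of the user-facing behavior the linked issue describes (illustration only, not part of the PR): when the caller passes an explicit dtype, the reduction should accumulate in that low-precision type rather than silently upcasting f16/bf16 to f32 for the computation.

```python
import jax.numpy as jnp

x = jnp.ones(8, dtype=jnp.float16)

# The result dtype follows the explicit dtype argument; the point of
# this PR is that the *computation* also stays in f16 when dtype is
# given, instead of being upcast to f32 internally (see #26365).
y = jnp.sum(x, dtype=jnp.float16)
print(y.dtype)  # float16
```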

@jakevdp jakevdp self-assigned this Feb 7, 2025
@jakevdp jakevdp marked this pull request as draft February 7, 2025 19:24
@jakevdp jakevdp added the pull ready Ready for copybara import and testing label Feb 7, 2025
@jakevdp jakevdp requested a review from superbobry February 11, 2025 22:41
@jakevdp jakevdp marked this pull request as ready for review February 11, 2025 22:41
@superbobry (Collaborator) left a comment


LGTM, but the test failure on CPU looks relevant.

@@ -231,7 +231,7 @@ def _reduce_sum(a: ArrayLike, axis: Axis = None, dtype: DTypeLike | None = None,
                  initial: ArrayLike | None = None, where: ArrayLike | None = None,
                  promote_integers: bool = True) -> Array:
   return _reduction(a, "sum", lax.add, 0, preproc=_cast_to_numeric,
-                    bool_op=lax.bitwise_or, upcast_f16_for_computation=True,
+                    bool_op=lax.bitwise_or, upcast_f16_for_computation=(dtype is None),
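A minimal NumPy sketch of what this one-line change does (a hypothetical helper, not JAX's actual implementation): the f16-to-f32 upcast for accumulation is applied only when the user did not pass a dtype.

```python
import numpy as np

def reduce_sum_sketch(a, dtype=None):
    # Hypothetical simplification of the PR's logic: upcast float16 to
    # float32 for the accumulation only when no dtype was requested.
    upcast_f16_for_computation = dtype is None
    if upcast_f16_for_computation and a.dtype == np.float16:
        comp_dtype = np.float32
    elif dtype is not None:
        comp_dtype = np.dtype(dtype)
    else:
        comp_dtype = a.dtype
    result = a.astype(comp_dtype).sum()
    # Cast back to the requested dtype (or the input dtype if none given).
    out_dtype = np.dtype(dtype) if dtype is not None else a.dtype
    return result.astype(out_dtype)

x = np.ones(8, dtype=np.float16)
print(reduce_sum_sketch(x).dtype)                    # float16, accumulated in f32
print(reduce_sum_sketch(x, dtype=np.float16).dtype)  # float16, accumulated in f16
```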

Nit: you can drop the parens around `dtype is None` here and elsewhere.

Development

Successfully merging this pull request may close these issues.

jnp.mean(x, dtype=bfloat16) is not respected
3 participants