
Add support for the new reducer interface to dynamic forall #1759

Merged: 8 commits into develop, Dec 12, 2024

Conversation

@artv3 (Member) commented Oct 31, 2024:

Summary

This PR adds support for the new reducer interface to dynamic forall and brings it out of experimental.

@artv3 (Member, Author) commented Nov 26, 2024:

@LLNL/raja-core does anyone else want to take a look, or are we ready to merge?

@rhornung67 (Member) commented:
@artv3 I'd like to look this over before it is merged.

    if (pol > N - 1) {
      RAJA_ABORT_OR_THROW("Policy enum not supported");
    }

    template <typename POLICY_LIST>
    struct dynamic_helper<0, POLICY_LIST>
(Member):

For a future pass, we should use parameter pack expansion instead of recursive function calls to do the invoke.
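The suggestion above can be sketched in isolation. This is a minimal illustration of dispatching a runtime policy index over a compile-time policy list with a fold over an index pack, not RAJA's actual implementation; all names (`invoke_policy`, the policy tags) are illustrative.

```cpp
#include <cassert>
#include <cstddef>
#include <stdexcept>
#include <type_traits>
#include <utility>

// Illustrative sketch: dispatch a runtime index over a compile-time
// policy list using pack expansion instead of a recursive helper struct.
template <typename... Policies, typename Body, std::size_t... Is>
void invoke_policy_impl(std::size_t pol, Body&& body,
                        std::index_sequence<Is...>)
{
  bool matched = false;
  // Fold over the comma operator: the branch whose index equals pol runs
  // the body with a default-constructed tag of the matching policy type.
  ((pol == Is ? (body(Policies{}), matched = true) : false), ...);
  if (!matched) {
    throw std::runtime_error("Policy enum not supported");
  }
}

template <typename... Policies, typename Body>
void invoke_policy(std::size_t pol, Body&& body)
{
  invoke_policy_impl<Policies...>(pol, std::forward<Body>(body),
                                  std::index_sequence_for<Policies...>{});
}
```

This replaces the `dynamic_helper<N, POLICY_LIST>` recursion with a single flat expansion, and keeps the out-of-range abort behavior in one place.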

    RAJA::expt::dynamic_forall<policy_list>(pol, RAJA::RangeSegment(0, N), [=] RAJA_HOST_DEVICE (int i) {

    RAJA::dynamic_forall<policy_list>(pol, range,
        RAJA::expt::Reduce<RAJA::operators::plus>(&sum),
        RAJA::expt::KernelName("RAJA dynamic forall"),
(Member):

Should we pull off more of the band-aid and move KernelName out of the expt namespace?

(Member):

KernelName functions within the new reducer framework, so I think we need to keep it in expt until the new reducers are fully integrated.

(Member):

It seems like it should work in kernels without reductions also. Maybe that's a topic for a later discussion.

(Member, Author):

I'm using KernelName with the RAJA Caliper integration here: #1773. I would be in favor of getting it out of expt to prompt folks to start using it more confidently.

(Member):

> It seems like it should work in kernels without reductions also. Maybe that's a topic for a later discussion.

It will, but it requires the same machinery as the new-style reductions. If that machinery, or the interface to it, is experimental then this probably should still be too. Otherwise it's rather innocuous.

(Member, Author):

That's a good point; I agree with @trws. It should stay in expt until we move the new reducer interface out of expt.

@rhornung67 (Member) left a comment:

Just one question from me


    int sum = 0;
    using VAL_INT_SUM = RAJA::expt::ValOp<int, RAJA::operators::plus>;
(Member):

Why do we need this? The way the machinery works on the back end, it would be just as easy to hand the lambda a reference to an int. Is this to prevent users from using a different operator on the target, or something like that? Either way, it shouldn't hold this PR up, but it would probably reduce the verbosity of these reductions a significant amount.

(Member):

We've recently modified the new-reducer interface to accept only these ValOp objects. This allows the operations within the lambda to be consistent, e.g. valopobj.minloc(array, loc); rather than the macros for loc reductions we were using before. It also allows us to ensure some type and operator safety among the possible reduction objects, e.g. a sum reducer cannot call min(). The ValOp objects take references to the original int or location, so nothing functionally changes for the user outside of the RAJA lambda. Yes, it's more verbose and somewhat ugly, but if nvcc were to allow auto deduction of types in lambda parameters, then we could get rid of some of this verbosity.
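The type-and-operator-safety point can be illustrated with a standalone sketch. This is not RAJA's ValOp implementation; the names (`val_op`, `plus_op`, `min_op`) are illustrative stand-ins showing the design idea: the wrapper holds a reference to the user's variable and exposes only the operations consistent with the reduction operator.

```cpp
#include <algorithm>
#include <cassert>

// Illustrative sketch of the ValOp idea (not RAJA's implementation):
// each specialization exposes only the operation matching its reducer,
// so e.g. a sum reducer cannot call min().
struct plus_op {};
struct min_op {};

template <typename T, typename Op>
class val_op;  // primary template intentionally undefined

template <typename T>
class val_op<T, plus_op> {
  T& target_;  // reference to the user's original variable
public:
  explicit val_op(T& t) : target_(t) {}
  val_op& operator+=(const T& v) { target_ += v; return *this; }
  // No min()/max() members: calling them is a compile-time error.
};

template <typename T>
class val_op<T, min_op> {
  T& target_;
public:
  explicit val_op(T& t) : target_(t) {}
  void min(const T& v) { target_ = std::min(target_, v); }
};
```

Because the wrapper stores a reference, the user's `int` is updated in place, so nothing functionally changes outside the lambda, which is the behavior described above.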

(Member):

Ok, I see where you're coming from. On the *loc objects, it makes sense to me to have a helper type for those that has an <op>loc or similar on it, but why not allow the regular reference for the single-component types? It's sometimes useful to do different operations in the body than at the actual reduction. Usually that's for things like getting a per-thread value, like "max iterations computed by one thread" or similar, but it does occasionally come up.

To be explicit, I think the ValOp types are a good idea like the strongly-typed indexes, but it seems like something that should be optional rather than required. Also, I would expect to be able to get it by doing RAJA::expt::Reduce<RAJA::operators::plus, int>::arg_type, or decltype(RAJA::expt::Reduce<RAJA::operators::plus>(&sum))::arg_type, or similar so the declaration for the type used to pass in the reducer provides the appropriate ValOp type.
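The `arg_type` suggestion can be sketched generically. This is a hypothetical illustration, not RAJA's interface; `reduce_desc`, `Reduce`, and the stand-in `val_op` here are invented for the example. The point is that the reducer descriptor itself publishes the wrapper type its lambda argument should have.

```cpp
#include <type_traits>

// Hypothetical sketch (names are illustrative, not RAJA's interface):
// the reducer descriptor publishes the lambda-argument wrapper type,
// so callers recover it with decltype(...)::arg_type instead of
// spelling the wrapper out by hand.
struct plus_op {};

template <typename T, typename Op>
struct val_op { T& target; };  // stand-in for the real wrapper type

template <typename Op, typename T>
struct reduce_desc {
  T* target;
  using arg_type = val_op<T, Op>;  // the type the lambda should accept
};

template <typename Op, typename T>
reduce_desc<Op, T> Reduce(T* p) { return {p}; }
```

With this shape, `decltype(Reduce<plus_op>(&sum))::arg_type` names the wrapper type directly, which is the usage pattern proposed in the comment above.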

(Member, Author):

I think these are good topics; I can capture the comments here as an issue for further discussion. But I think the topic here may be out of scope for this PR.

(Member):

Sounds right @artv3.

@artv3 (Member, Author) commented Dec 5, 2024:

To double check, is this ready for merging? It sounds like we are keeping the expt namespace on KernelName.

@artv3 (Member, Author) commented Dec 10, 2024:

Unfortunately, the CI is giving me some trouble.

@rhornung67 (Member) commented:
@artv3 I merged my SYCL PR after checks passed and merged develop into this PR which restarted CI checks. Hopefully, it will come back green.

@artv3 (Member, Author) commented Dec 12, 2024:

This omp target test keeps failing:

 test-kernel-basic-single-loop-Segments-OpenMPTarget.exe 

@rhornung67 (Member) commented:
@artv3 this is good to merge now. Restarting the OpenMP target test job on lassen usually resolves the issue. I don't know why this test fails intermittently; failure is more common in CI than when running manually, and it is not predictable. We could consider disabling that kernel for CI.

@artv3 artv3 merged commit 0e40820 into develop Dec 12, 2024
32 checks passed
@artv3 artv3 deleted the artv3/dynamic-forall-reductions branch December 12, 2024 17:59
5 participants