
Some questions #9

Closed
markcmiller86 opened this issue May 20, 2019 · 2 comments

Comments

@markcmiller86

  • How is this code licensed?
  • How much has changed in CUDA, OpenMP, and MPI in the last 5 years that would make re-coding these examples essential to keep them up to date with current standards?
@joaomlneto
Owner

joaomlneto commented May 21, 2019

Hi @markcmiller86 ! Thanks for the interest 😃

How is this code licensed?

Fotis and I (the students) are OK with licensing this under the MIT license (or any other permissive license). However, I have also forwarded the question to the UPC/BSC professors who taught the course:

If there are no objections, it'll be MIT within the next few days.

How much has changed in CUDA, OpenMP and MPI

  • I haven't used CUDA since, so I can't say for sure. The current code may not even be fully working (see Fix CUDA Version #3), and I don't have a CUDA-enabled card to test it.
  • I believe there is nothing new in OpenMP that significantly affects performance. Maybe using tasks (à la OmpSs).
  • The newer MPI standards have features that may yield significant improvements, as I believe there are newer ways of reducing synchronization overheads, but I can't say for sure!

I haven't been a big user of these frameworks in the past few years :)

However, I'd like to do basic maintenance on this code to (1) make it current and (2) make it work properly. In case you encounter any issues, please do let us know and we'll try to fix it!

@markcmiller86
Author

@joaomlneto thanks for the quick reply. I downloaded and used the MPI version a bit yesterday. It's great!! I love a resource like this because it is kind of the "hello world" of numerical libraries.

After considering options, however, I've decided I need to work from a different code base toward a similar goal: 1D heat equation implementations for serial, OpenMP parallel, MPI parallel, and CUDA parallel execution, with performance-portability layers like RAJA, Kokkos, and Legion, as well as including performance counters. This is all part of two parallel efforts: to demonstrate and train developers in the use of ECP project Continuous Integration resources, and to train early-career developers in the use of numerical libraries.
