
add macro to create custom Ops also on aarch64 #871

Merged: 23 commits into master from vc/custom_ops, Sep 24, 2024
Conversation

@vchuravy (Member) commented Sep 3, 2024

No description provided.

@giordano (Member) commented Sep 3, 2024

Need to delete

Sys.ARCH !== :powerpc64le &&
Sys.ARCH !== :ppc64le &&
Sys.ARCH !== :aarch64 &&
!startswith(string(Sys.ARCH), "arm")
to actually test this?

@vchuravy (Member, Author) commented Sep 3, 2024

No, since I add it to this list as well:

operators = [MPI.SUM, +, my_reduce]
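
For context, a minimal sketch (hypothetical, not the actual test file) of the kind of thing this exercises: a plain top-level function passed as the reduction operator alongside the built-in ones.

using MPI
MPI.Init()

my_reduce(x, y) = x + y    # assumed definition, for illustration only

for op in (MPI.SUM, +, my_reduce)
    result = MPI.Allreduce(1.0, op, MPI.COMM_WORLD)
    MPI.Comm_rank(MPI.COMM_WORLD) == 0 && @show op result
end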

@vchuravy marked this pull request as ready for review on September 4, 2024
@eschnett (Contributor) left a comment


I only looked at the code and didn't try to run it.

@giordano (Member) left a comment


Just to understand: does this work around the closure limitations by creating a custom module on-the-fly and defining the closure "statically" inside it?

Side note: it's a bit unfortunate that this introduces an alternative method to Op for defining custom operators: generic libraries will have to pick either @Op or Op (presumably the former if they want to work on all platforms), but I guess we don't have more comprehensive solutions at the moment.
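
For background, a rough sketch of the restriction being worked around (illustrative, not code from this PR): @cfunction can wrap a runtime closure only on architectures where Julia can generate trampolines, so on aarch64 the callback handed to MPI has to be a top-level function.

# Works on all supported platforms: the callback is a top-level function,
# so the C function pointer can be created statically.
f(x, y) = x + y
cf_static = @cfunction(f, Cdouble, (Cdouble, Cdouble))

# Fails on aarch64/ARM: wrapping a runtime closure needs a dynamically
# generated trampoline, which Julia only provides on Intel architectures.
# g = let a = 2.0; (x, y) -> a * x + y end
# cf_closure = @cfunction($g, Cdouble, (Cdouble, Cdouble))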

@giordano (Member) commented Sep 5, 2024

Also, should we add a reference to @Op in the error messages at

@static if MPI_LIBRARY == "MicrosoftMPI" && Sys.WORD_SIZE == 32
"[...] As an alternative, use the MPI.@Op macro...

@vchuravy (Member, Author) commented Sep 5, 2024

[…] it's a bit unfortunate that this is introducing an alternative method to Op to define custom operators: generic libraries will have to pick either @Op or Op (presumably the former if they want to work on all platforms), but I guess we don't have more comprehensive solutions at the moment.

The situation is even funnier. Op tries to handle everything generically, so it takes a function f without knowing whether it is a closure or not. @Op "just" creates a method of Op that dispatches on the function f and tada...

So it doesn't introduce an alternative method: you still have to call Op, it just "pre-registers" f.
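
A rough sketch of that distinction (illustrative only, not the actual macro expansion): for one specific function/type pair the macro emits a top-level C-callable wrapper and a method of MPI.Op that dispatches on typeof(my_reduce), so the usual Op(my_reduce, Float64) call finds the pre-registered wrapper instead of trying to build a closure cfunction at runtime.

my_reduce(x, y) = x + y    # hypothetical user function

# A top-level loop applying my_reduce elementwise over the MPI buffers
# (signature simplified relative to the real MPI_User_function).
function my_reduce_loop(vin::Ptr{Float64}, vinout::Ptr{Float64}, len::Cint)
    for i in 1:len
        unsafe_store!(vinout, my_reduce(unsafe_load(vin, i), unsafe_load(vinout, i)), i)
    end
    return nothing
end

# Because my_reduce_loop is a top-level function, this works on aarch64:
cf = @cfunction(my_reduce_loop, Cvoid, (Ptr{Float64}, Ptr{Float64}, Cint))

# Conceptually, the macro then adds something like
#     MPI.Op(::typeof(my_reduce), ::Type{Float64}) = <MPI_Op built from cf>
# so the user-facing call is still MPI.Op(my_reduce, Float64).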

@giordano (Member)

I like the new name much better; it makes the difference clearer, thanks!

@giordano (Member) commented Sep 19, 2024

With

diff --git a/docs/examples/03-reduce.jl b/docs/examples/03-reduce.jl
index 86d22be1..26aa9658 100644
--- a/docs/examples/03-reduce.jl
+++ b/docs/examples/03-reduce.jl
@@ -33,8 +33,13 @@ end
 
 X = randn(10,3) .* [1,3,7]'
 
+# Register the custom reduction operator.  This is necessary only on platforms
+# where Julia doesn't support closures as cfunctions (e.g. ARM), but can be used
+# on all platforms for consistency.
+MPI.@RegisterOp(pool, SummaryStat)
+
 # Perform a scalar reduction
-summ = MPI.Reduce(SummaryStat(X), pool, root, comm)
+summ = MPI.Reduce(SummaryStat(X), pool, comm; root)
 
 if MPI.Comm_rank(comm) == root
     @show summ.var
@@ -42,7 +47,7 @@ end
 
 # Perform a vector reduction:
 # the reduction operator is applied elementwise
-col_summ = MPI.Reduce(mapslices(SummaryStat,X,dims=1), pool, root, comm)
+col_summ = MPI.Reduce(mapslices(SummaryStat,X,dims=1), pool, comm; root)
 
 if MPI.Comm_rank(comm) == root
     col_var = map(summ -> summ.var, col_summ)

I can run the custom reduction example in the documentation on aarch64-darwin; without this change it errors with

error("User-defined reduction operators are currently not supported on non-Intel architectures.\nSee https://github.com/JuliaParallel/MPI.jl/issues/404 for more details.")

I think we may want to suggest that users look at MPI.@RegisterOp in that error message.
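
For reference, the kind of hint that could be appended there (my suggested wording, not necessarily what the PR ends up with):

error("""
      User-defined reduction operators are currently not supported on non-Intel architectures.
      See https://github.com/JuliaParallel/MPI.jl/issues/404 for more details.
      As a workaround, the operator can be pre-registered with the MPI.@RegisterOp macro.
      """)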

@giordano (Member)

It turns out StaticArrays is used? I'm missing where, though 😕

@giordano (Member)

Last bit: update the docs about known issues #871 (comment), and then this is good to go for me!

@vchuravy (Member, Author)

@giordano done!

@giordano (Member) left a comment


Thank you so much, this is great!

@vchuravy merged commit c91d5a7 into master on Sep 24, 2024
46 of 50 checks passed
@vchuravy deleted the vc/custom_ops branch on September 24, 2024 12:15
@simonbyrne (Member)

Could we make use of the potential PerProcess construct here (JuliaLang/julia#55793)?

@vchuravy (Member, Author)

Could we make use of the potential PerProcess construct

Yeah, I think that could be used to simplify the implementation in general, and also our home-grown handling of other globals.
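
A rough sketch of how that might look, assuming the construct lands roughly as proposed in JuliaLang/julia#55793 (a lazy once-per-process initializer); all names here are illustrative:

my_reduce(x, y) = x + y    # hypothetical user function

# Build the MPI_Op handle lazily, once per process, instead of tracking
# initialization state by hand in a global.
const my_reduce_op = OncePerProcess{MPI.Op}() do
    MPI.Op(my_reduce, Float64)    # illustration; actual registration may differ
end

# The first call per process runs the initializer; later calls return the cached Op:
# MPI.Allreduce(x, my_reduce_op(), MPI.COMM_WORLD)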

@giordano mentioned this pull request Sep 25, 2024