
Throw error if CUDA-aware MPI is not found for distributed GPU architecture #3883

Open · glwagner wants to merge 8 commits into main
Conversation

@simone-silvestri (Collaborator) left a comment

nice

@glwagner (Member, Author)

Hmm, we need to take care, because apparently only Open MPI allows us to check this:

help?> MPI.has_cuda
  MPI.has_cuda()

  Check if the MPI implementation is known to have CUDA support. Currently only Open MPI provides a mechanism to check, so it will return false with other implementations (unless overridden). For "IBMSpectrumMPI" it will
  return true.

  This can be overridden by setting the JULIA_MPI_HAS_CUDA environment variable to true or false.

  │ Note
  │
  │  For OpenMPI or OpenMPI-based implementations you first need to call Init().

  See also MPI.has_rocm for ROCm support.
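
For reference, a minimal check along these lines might look like the following (per the docstring, only Open MPI-based implementations report reliably, and Init() must be called first):

using MPI

MPI.Init()  # required before has_cuda() for Open MPI-based implementations

# Returns false for most non-Open MPI implementations even when CUDA support exists,
# unless overridden with the JULIA_MPI_HAS_CUDA environment variable.
if !MPI.has_cuda()
    @warn "MPI.has_cuda() reports no CUDA support (or support could not be determined)."
end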

@glwagner (Member, Author)

Revisiting this briefly --- we have found that has_cuda only gives a reliable answer for Open MPI. So we probably don't want this PR as-is.

However, there are other possible solutions. For example, we could write a test like this:

using MPI
using CUDA
using Oceananigans.Architectures: architecture

MPI.Init()

function sendrecv_works(grid)
    arch = architecture(grid)
    comm = arch.communicator
    rank = arch.local_rank
    Nranks = MPI.Comm_size(comm)

    # Exchange a small GPU buffer with neighboring ranks in a ring.
    dst = mod(rank + 1, Nranks)
    src = mod(rank - 1, Nranks)

    N = 4
    FT = eltype(grid)
    send_mesg = CuArray{FT}(undef, N)
    recv_mesg = CuArray{FT}(undef, N)
    fill!(send_mesg, FT(rank))
    CUDA.synchronize()

    try
        # Passing CuArrays directly to MPI requires a CUDA-aware build;
        # this is the keyword-argument form of MPI.Sendrecv! in current MPI.jl.
        MPI.Sendrecv!(send_mesg, recv_mesg, comm; dest=dst, source=src)
        return true
    catch err
        @warn "MPI.Sendrecv! test failed." exception=(err, catch_backtrace())
        return false
    end
end

adapted from https://gist.github.com/luraess/0063e90cb08eb2208b7fe204bbd90ed2
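
A hypothetical way to wire such a check into distributed-GPU setup (the call site and error text are illustrative, not part of this PR) could be:

# Hypothetical usage sketch: `grid` is a distributed GPU grid and
# `sendrecv_works` is the function above.
if !sendrecv_works(grid)
    error("MPI.Sendrecv! with GPU buffers failed. CUDA-aware MPI appears to be " *
          "unavailable, but distributed GPU runs require a CUDA-aware MPI build.")
end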

@glwagner (Member, Author)

We're also discussing whether such a helper could be added to MPI.jl itself: JuliaParallel/MPI.jl#886

@glwagner (Member, Author)

We may want a similar, independent test for Allreduce, based on the discussion at CliMA/ClimaOcean.jl#225.
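
A sketch of such an Allreduce check, analogous to the Sendrecv! test above (assuming MPI.Init() has already been called; the function name is illustrative):

using MPI
using CUDA

function allreduce_works(comm, FT=Float64)
    buf = CUDA.ones(FT, 4)  # GPU buffer passed directly to MPI
    CUDA.synchronize()
    try
        MPI.Allreduce!(buf, +, comm)  # in-place sum across ranks
        return true
    catch err
        @warn "MPI.Allreduce! test failed." exception=(err, catch_backtrace())
        return false
    end
end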

@glwagner (Member, Author)

Also, because configuring CUDA-aware MPI is intricate (it can fail for reasons outside your control, for example an incorrect library installation on a cluster, which I experienced recently), we should give more information on failure --- MPI configuration, architecture details, etc.
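
As a rough illustration (the helper name and message format are hypothetical), a failure report could gather some of this context:

using MPI

# Hypothetical helper: collect context that is useful when the CUDA-aware MPI test fails.
function report_cuda_aware_mpi_failure(arch)
    error(string("CUDA-aware MPI test failed on rank ", arch.local_rank, ".\n",
                 "MPI library: ", MPI.Get_library_version(), "\n",
                 "MPI.has_cuda(): ", MPI.has_cuda(), "\n",
                 "Architecture: ", summary(arch)))
end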

@simone-silvestri (Collaborator) commented Nov 12, 2024

I wonder if they have solved this problem in https://github.com/CliMA/ClimaComms.jl

We could also consider supporting non-CUDA-aware MPI by staging the buffers on the CPU, performing a copy to the CPU before the send/recv operations. That would be a big step, though, because we would then have to support those MPI code paths everywhere, not only in fill_halo_regions! (for example also in the distributed Fourier transforms and in the all-reduce operations).
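
For concreteness, the staging idea amounts to something like this (a rough sketch, not an implementation proposal; the keyword form of Sendrecv! is assumed):

using MPI
using CUDA

# Stage GPU buffers through the host so that a non-CUDA-aware MPI can be used.
function staged_sendrecv!(send_dev::CuArray, recv_dev::CuArray, comm; dest, source)
    send_host = Array(send_dev)   # device -> host copy
    recv_host = similar(send_host)
    MPI.Sendrecv!(send_host, recv_host, comm; dest, source)
    copyto!(recv_dev, recv_host)  # host -> device copy
    return recv_dev
end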

@glwagner (Member, Author)

> I wonder if they have solved this problem in https://github.com/CliMA/ClimaComms.jl

Why do you say that?

> We could also consider supporting non-CUDA-aware MPI by staging the buffers on the CPU, performing a copy to the CPU before the send/recv operations. That would be a big step, though, because we would then have to support those MPI code paths everywhere, not only in fill_halo_regions! (for example also in the distributed Fourier transforms and in the all-reduce operations).

Is there a point? Would such models be useful / run efficiently?

@simone-silvestri (Collaborator)

> Why do you say that?

I think they have non-GPU-aware support.

> Is there a point? Would such models be useful / run efficiently?

There would be some overhead, and hiding communication latency would be more difficult.
Probably not the way forward.

@glwagner (Member, Author)

> Why do you say that?

> I think they have non-GPU-aware support.

Why would having non-GPU-aware support mean that ClimaComms throws useful errors when CUDA-aware MPI is not available? I don't quite follow. Anyway, if they do have nice error messages, let's use those utilities.

> There would be some overhead

The question is really how much overhead there would be. Would it be a small thing, or would it be so large that there is no point in even trying to run such a simulation?

The other question is, are there systems where CUDA-aware MPI cannot be installed?
