
Run Beersheba's convolve on GPUs using cupy #886

Open · wants to merge 4 commits into master
Conversation

@soleti commented Jul 11, 2024

I added functionality that allows running convolve and fftconvolve on the GPU using CuPy, when available.

The performance gain becomes significant only when the image is large (in my tests, when bin_size < 1; see plot).

Two things I am not sure of:

  • How to test this on machines that don't have a GPU
  • Using a global flag to check if there is a GPU (how are similar cases handled in IC?)

[Plot: CPU vs. GPU performance as a function of bin_size]
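A minimal sketch (not the PR's actual diff) of the kind of dispatch described above, assuming cupyx.scipy.signal.fftconvolve as the GPU counterpart of SciPy's fftconvolve:

```python
from scipy.signal import fftconvolve as fftconvolve_cpu

try:
    import cupy as cp
    from cupyx.scipy.signal import fftconvolve as fftconvolve_gpu
    CUDA_AVAILABLE = cp.cuda.is_available()
except ImportError:
    CUDA_AVAILABLE = False

def fftconvolve(image, kernel, mode="same"):
    # Run on the GPU when CuPy and a device are available, moving the
    # arrays to the device and back; otherwise fall back to SciPy.
    if CUDA_AVAILABLE:
        out = fftconvolve_gpu(cp.asarray(image), cp.asarray(kernel), mode=mode)
        return cp.asnumpy(out)
    return fftconvolve_cpu(image, kernel, mode=mode)
```

The module-level CUDA_AVAILABLE flag in this sketch is the kind of global flag the second question above refers to.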

@jwaiton (Collaborator) commented Jul 11, 2024

Hi Stefano!

I'm not sure how IC takes into account the specifications of the machine in use, but I believe you can avoid using a global flag by using the cupy function cupy.cuda.is_available(), as described in this thread.

I'm not too familiar with cupy, but I'm assuming it requires a CUDA toolkit to be installed before use, right? If so, the installation of such a toolkit should possibly be included when building IC, and a check for a CUDA-compatible GPU during the build would help prevent users without a compatible GPU from installing it. I'm not sure how tricky it would be to check this within bash, but I can have a look.

@soleti (Author) commented Jul 11, 2024

@jwaiton the global flag is not needed to check if CUDA is available; I am already doing that. It is needed because inside the code there are a couple of ifs that depend on the presence of CUDA, and I wanted to avoid calling cupy.cuda.is_available() every time. Setting a cupy_available flag at installation time is a good idea, though, but I don't know how to pass that information to the module. If you could help with that, it would be great!

@jwaiton (Collaborator) commented Jul 11, 2024

@soleti I think a nice way of doing it would be adding a new check to manage.sh that looks for a CUDA installation (using something like command -v nvcc and checking that it prints the path to the binary) and then sets a bash variable as a CUDA flag based on this. You can pass the bash variable into deconv_functions.py quite easily using os.environ, although I'm not sure if there is some other method for doing this that is standard within IC. I can try to write something up tomorrow to test this.

Edit: checking specifically for a CUDA-compatible GPU is more complicated; I can see a couple of methods, but they're platform-specific. I'll keep looking into it.
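A minimal sketch of the Python side of this suggestion; the variable name IC_CUDA is hypothetical, not something manage.sh currently sets:

```python
import os

# manage.sh could run something like `command -v nvcc` and, on success,
# `export IC_CUDA=1` before launching Python. Here we read that flag back;
# IC_CUDA is a hypothetical name used only for illustration.
CUDA_FLAG = os.environ.get("IC_CUDA", "0") == "1"
```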

@soleti (Author) commented Jul 12, 2024

Passing an environment variable is a possibility, but I want to double-check with the IC experts whether this is a viable solution or there are other guidelines for this use case. @gonzaponte ?

@gonzaponte (Collaborator) commented

> How to test this on machines that don't have a GPU

To test what exactly?

> Using a global flag to check if there is a GPU (how are similar cases handled in IC?)

We try not to rely on global flags in IC. If the reason to use one is:

> I wanted to avoid calling cupy.cuda.is_available() every time

I suggest replacing the variable with a function with cached output that checks for that (and can even do imports, etc.).

Also, I would make this an opt-in feature, probably controlled from a config file.
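A minimal sketch combining both suggestions; the function and parameter names are hypothetical, not IC's actual API:

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def gpu_available() -> bool:
    # Cached, so the import attempt and the device check run only
    # once per process instead of on every call.
    try:
        import cupy
        return cupy.cuda.is_available()
    except ImportError:
        return False

def should_use_gpu(requested: bool) -> bool:
    # Opt-in: take the GPU path only when explicitly requested
    # (e.g. via a use_gpu key in the config file) *and* usable.
    return requested and gpu_available()
```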

@soleti (Author) commented Jul 26, 2024

>> How to test this on machines that don't have a GPU
>
> To test what exactly?

That the GPU code is giving correct results.

>> Using a global flag to check if there is a GPU (how are similar cases handled in IC?)
>
> We try not to rely on global flags in IC. If the reason to use one is:
>
>> I wanted to avoid calling cupy.cuda.is_available() every time
>
> I suggest replacing the variable with a function with cached output that checks for that (and can even do imports, etc.).

I am trying to do this, but I am not sure how to do the import inside the cached function and then make that module available outside of it. The only way I see is to use global, which we probably want to avoid. Did you mean something like this, or did I misunderstand?

```python
from functools import lru_cache

@lru_cache
def is_gpu_available() -> bool:
    '''
    Check if GPUs are available: import the necessary library and
    return True if a GPU can be used, False otherwise.
    '''
    try:
        import cupy as cp
        return cp.cuda.is_available()
    except ImportError:
        return False
```

Maybe making is_gpu_available an inner function of richardson_lucy?

> Also, I would make this an opt-in feature, probably controlled from a config file.

Yes.

@gonzaponte (Collaborator) commented

>>> How to test this on machines that don't have a GPU
>>
>> To test what exactly?
>
> That the GPU code is giving correct results.

By definition this is not possible, right? We will need to find a way to run the tests on machines with GPUs...

> Did you mean something like this or did I misunderstand?

Including the imports in the function was a stupid suggestion on my side, sorry. The function should only check the availability of GPUs.

Can the libraries be installed on any machine, even on those without a GPU? If so, the try/except clause around the imports can be omitted, provided the libraries are included in the IC environment.
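In that case the cached check would simplify to something like this sketch, assuming cupy is always importable in the IC environment:

```python
from functools import lru_cache
import cupy

@lru_cache
def is_gpu_available() -> bool:
    # No try/except needed if cupy ships with the IC environment.
    return cupy.cuda.is_available()
```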

Going back to...

> I wanted to avoid calling cupy.cuda.is_available() every time

Is this slow, or are there other reasons for it? Caching was the solution I proposed because I assumed speed was the concern.
