Error when running the example code for GPU-acceleration using diffeqpy #138
I am also facing the same problem!
Updates for the changes in ModelingToolkit v9 (fixes #138)
Fixed in the new release. Let me know if there are any remaining issues.
Hi Chris, thanks for your fix! Now the de.jit32() function works perfectly, but when using the solver via GPU I still get the following error:
Followed by many more stacktrace messages that I can share if needed... Not sure if @jaswin90 can reproduce the same issue as well?
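(For reference, the call that triggers it is the ensemble GPU solve from the README example; a sketch, with de, cuda, and ensembleprob set up exactly as in the example code in my original post:)

```python
# Sketch of the failing step: the GPU ensemble solve from the README example.
# de, cuda, and ensembleprob are assumed to be set up as in the original post.
sol = de.solve(
    ensembleprob,
    cuda.GPUTsit5(),
    cuda.EnsembleGPUKernel(cuda.CUDABackend()),
    trajectories=10000,
    saveat=0.01,
)
```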
@utkarsh530 are you able to reproduce this with just Julia?
https://github.com/SciML/diffeqpy?tab=readme-ov-file#benchmark The Julia example here works for me.
@ChrisRackauckas I meant that the code here works fine, and I cannot reproduce the error 😅
@gonzalovivares does the Julia code work for you? Maybe it's an installation issue, or you are just using an older version.
@ChrisRackauckas Yes, running the DiffEqGPU examples in Julia (via VS Code) works fine for me as well. I created a new project and Python environment to reinstall diffeqpy and rule out an installation issue (diffeqpy 2.4.1 and Julia 1.10.2), but the error in Python when calling cuda.EnsembleGPUKernel still shows up... Not sure if any other users are also getting this error in Python. I have found discussions from other packages reporting the same error: https://discourse.julialang.org/t/using-mapreduce-on-gpu-with-cuda-jl/108023
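(In case it's useful, this is how I checked which Julia package versions diffeqpy is actually using; a sketch via juliacall, which diffeqpy is built on, so the exact environment layout may differ:)

```python
# Sketch: inspect the Julia environment that diffeqpy initialized.
from diffeqpy import de  # starts the Julia runtime managed by diffeqpy
from juliacall import Main as jl

# Print the versions of the Julia packages in the active project
# (e.g. DiffEqGPU, OrdinaryDiffEq, CUDA).
jl.seval("import Pkg; Pkg.status()")
```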
I'm a bit confused at this point. What's the remaining error?
My problem is that when running the documentation example code of diffeqpy using the GPU in Python (shown in the original post), I encounter the following error (I am using Python 3.12 and Julia 1.10.2, and the GPU works with other code in both Python and Julia):
And many more stacktrace messages that I can share if needed.
Hi all, first of all, thanks for developing diffeqpy; since I started using it, my scientific modelling work has been greatly accelerated (I'm simply a user of diffeqpy with limited knowledge of software engineering).
I have been planning to upgrade my code to use the GPU of my laptop (NVIDIA RTX-5500) via CUDA, to reduce the simulation time of my models. However, when running the example code given in the repository for the use of diffeqpy.cuda, I encountered two runtime errors. First, the de.jit32() function gave me an error.
The code I ran was:
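(This is the diffeqpy.cuda example from the repository README; I'm reproducing it here from the README, so it may differ in small details from my exact script.)

```python
import numpy as np
from diffeqpy import de, cuda

# Lorenz system, out-of-place form as in the README example
def f(u, p, t):
    x, y, z = u
    sigma, rho, beta = p
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

u0 = [1.0, 0.0, 0.0]
tspan = (0.0, 100.0)
p = [10.0, 28.0, 8 / 3]
prob = de.ODEProblem(f, u0, tspan, p)
fast_prob = de.jit32(prob)  # this is the call that raised the first error

# Ensemble over randomized initial conditions and parameters
def prob_func(prob, i, rep):
    return de.remake(prob, u0=np.random.rand(3) * u0, p=np.random.rand(3) * p)

ensembleprob = de.EnsembleProblem(fast_prob, prob_func, safetycopy=False)

# GPU ensemble solve; this is where the cuda-related error appears
sol = de.solve(
    ensembleprob,
    cuda.GPUTsit5(),
    cuda.EnsembleGPUKernel(cuda.CUDABackend()),
    trajectories=10000,
    saveat=0.01,
)
```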
And the first error I encountered:
I read a previous thread about a similar error, and the conclusion was that de.jit() might not be useful in some cases, but I am not sure whether that would also apply to this example, or whether there is something I might need to do on my computer (am I missing some installation in Julia? Do I need a conda interpreter rather than a virtualenv in PyCharm...?).
Anyway, I skipped this error by using the plain de.ODEProblem to see if the GPU solver could work, and then I got this other error, related to the cuda sub-package:
And a long list of stacktraces that I can share if you need them.
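(For clarity, the variant that produced this second error is the same setup as the example above, just without the de.jit32 step; a sketch:)

```python
# Same Lorenz setup as in the example above, but passing the plain
# (non-jit32) ODEProblem to the GPU ensemble solver.
ensembleprob = de.EnsembleProblem(prob, prob_func, safetycopy=False)

sol = de.solve(
    ensembleprob,
    cuda.GPUTsit5(),
    cuda.EnsembleGPUKernel(cuda.CUDABackend()),
    trajectories=10000,
    saveat=0.01,
)
```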
I am wondering if there is a solution to this problem. I have been able to successfully run the DiffEqGPU.jl examples in Julia, so the GPU seems to work so far. But if there is no solution for me using Python, I might need to rewrite my models from Python to Julia (and learn Julia, by the way) to take advantage of my GPU. What do you think?
Thanks a lot.