GPU acceleration #96
You can look at how sklearn does it with the array API. See
https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/discriminant_analysis.py#L108
with the get_namespace function.
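A minimal sketch of that dispatch pattern (the `get_namespace` helper below is a simplified stand-in for illustration, not sklearn's actual implementation, which lives in the private `sklearn.utils._array_api` module and does more validation; `normalized_svd_entropy` is likewise a hypothetical name):

```python
import numpy as np

def get_namespace(*arrays):
    # Simplified stand-in: return the array module the inputs come from,
    # so the same code runs unchanged on NumPy or CuPy arrays.
    for a in arrays:
        if type(a).__module__.split(".")[0] == "cupy":
            import cupy as cp
            return cp
    return np

def normalized_svd_entropy(X):
    # Same math as compute_svd_entropy, but written against the namespace
    # returned above instead of a hard-coded numpy import.
    xp = get_namespace(X)
    _, sv, _ = xp.linalg.svd(X)
    sv_norm = sv / xp.sum(sv, axis=-1)[..., None]
    return -xp.sum(sv_norm * xp.log2(sv_norm), axis=-1)
```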
On Fri, Aug 9, 2024 at 6:18 PM, Drakula44 wrote:
Hi. I was working on my project when I wanted to use compute_svd_entropy,
but it was taking too much time.
I stumbled upon the cupy library, which is a drop-in replacement for numpy
that runs on CUDA.
I modified compute_svd_entropy locally like this:
```python
def compute_svd_entropy(data, tau=2, emb=10, n_jobs=1):
    ...
    import numpy as np
    if n_jobs == "cuda":
        import cupy as cp
        np = cp  # use cupy as a drop-in replacement for numpy
        data = cp.asarray(data)  # move the data to the GPU
    _, sv, _ = np.linalg.svd(_embed(data, d=emb, tau=tau, n_jobs=n_jobs))
    m = np.sum(sv, axis=-1)
    sv_norm = np.divide(sv, m[:, None])
    out = -np.sum(np.multiply(sv_norm, np.log2(sv_norm)), axis=-1)
    if n_jobs == "cuda":
        out = cp.asnumpy(out)  # copy the result back to host memory
    return out
```
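For illustration, a hypothetical call of the modified function (the array shape here is just an example, not from the original report):

```python
import numpy as np

X = np.random.randn(64, 100000)                   # e.g. 64 channels of signal
ent_cpu = compute_svd_entropy(X)                  # plain NumPy path
ent_gpu = compute_svd_entropy(X, n_jobs="cuda")   # CuPy path; needs a CUDA GPU
```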
And in my case it sped up from 37 s per chunk that I was passing to
3-4 s per chunk.
I can create a PR for that, for every function where it is applicable;
would you be willing to merge something like that? I know that it
introduces one more dependency (it can be optional), so I just wanted to
check first before submitting a PR. Thanks in advance.
PS. The n_jobs usage was motivated by mne, which uses n_jobs in that way.
Thank you for the suggestion. I looked into it a bit and I don't think there is much to do currently. I could still make a PR for this function, or some similar ones, but I don't think there is a need for that right now?
The clean way for me would be to rely on the sklearn backend system.
One constraint:
- mne-features should still work if you don't have cupy or pytorch.
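A minimal sketch of how that constraint could be met (a hypothetical helper, not existing mne-features code): import cupy lazily and fall back to numpy when it is missing.

```python
import warnings

def _get_array_module(use_cuda):
    # Hypothetical helper: return cupy when requested and available,
    # otherwise fall back to numpy so mne-features keeps working
    # without any GPU dependency installed.
    import numpy as np
    if not use_cuda:
        return np
    try:
        import cupy as cp
        return cp
    except ImportError:
        warnings.warn("cupy is not installed; falling back to numpy.")
        return np
```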