Parallel evaluation (Cross-Session, Within-Session) #364
Conversation
…allel-cross-session
I changed the WithinSession evaluation also...
…allel-cross-session
…allel-cross-session
I changed my mind, I want to merge this pull request. It is super helpful for increasing the speed, but maybe we need to change the names of the parameters. Could you check, @sylvchev and @carraraig?
…allel-cross-session # Conflicts: # moabb/evaluations/evaluations.py
Hi @sylvchev, can you please review this code?
This seems correct, but the diff is difficult to read due to the indentation change.
Did you test this parallel version to ensure that the results are the same when n_jobs and n_jobs_evaluation are > 1?
Yes @sylvchev, tested and working, same results! In the extreme case, with more than 100 jobs in parallel, we end up losing some jobs. It might be interesting to add a warning about the possible instability, but joblib already does that. Btw, do you think I should also convert the cross-subject evaluation?
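For context, a minimal sketch of how such a check could be run, assuming the MOABB CrossSessionEvaluation API and the n_jobs_evaluation parameter introduced in this PR (whose final name may still change); the dataset and pipeline choices are only illustrative:

```python
# Hedged sketch: compares serial vs. parallel evaluation results.
# n_jobs_evaluation is the parameter discussed in this PR; everything else
# (dataset, pipeline) is just an illustrative choice.
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

from moabb.datasets import BNCI2014001
from moabb.evaluations import CrossSessionEvaluation
from moabb.paradigms import LeftRightImagery

pipelines = {"CSP+LDA": make_pipeline(CSP(n_components=8), LinearDiscriminantAnalysis())}
paradigm = LeftRightImagery()
dataset = BNCI2014001()


def run(n_jobs_evaluation):
    evaluation = CrossSessionEvaluation(
        paradigm=paradigm,
        datasets=[dataset],
        overwrite=True,      # recompute instead of reusing cached results
        random_state=42,     # fix the seed so both runs are comparable
        n_jobs=1,
        n_jobs_evaluation=n_jobs_evaluation,
    )
    return evaluation.process(pipelines)


serial = run(1)
parallel = run(4)

# Scores should match (up to row ordering) between the serial and parallel runs.
keys = ["dataset", "subject", "session", "pipeline"]
merged = serial.merge(parallel, on=keys, suffixes=("_serial", "_parallel"))
assert (merged["score_serial"] - merged["score_parallel"]).abs().max() < 1e-10
```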
Cross-subject evaluations are very memory aggressive, and parallelization could require a huge amount of memory, but it could be nice to have a congruent API. I'll give a pragmatic answer: add the …
I had tried, and it's better to leave it for later.
It is a go! Let's open an issue to remember to add parallel evaluation and n_jobs_evaluation to CrossSubjectEvaluation
100%, still to be determined, it's a draft. Does it require some change in the function to run in parallel? I don't know.
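As a note on the proposed follow-up issue, a congruent CrossSubjectEvaluation call could look like the sketch below. The n_jobs_evaluation argument there is hypothetical, since this PR only covers the within-session and cross-session evaluations:

```python
# Hypothetical sketch only: n_jobs_evaluation is NOT added to CrossSubjectEvaluation
# by this PR; this shows what the congruent API discussed above could look like.
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

from moabb.datasets import BNCI2014001
from moabb.evaluations import CrossSubjectEvaluation
from moabb.paradigms import LeftRightImagery

evaluation = CrossSubjectEvaluation(
    paradigm=LeftRightImagery(),
    datasets=[BNCI2014001()],
    overwrite=True,
    n_jobs=1,
    n_jobs_evaluation=2,  # hypothetical parameter, pending the follow-up issue
)
# Cross-subject splits load many subjects at once, so memory grows quickly when
# several evaluation jobs run in parallel (the concern raised above).
results = evaluation.process(
    {"CSP+LDA": make_pipeline(CSP(n_components=8), LinearDiscriminantAnalysis())}
)
```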