evaluate_transformer_performance crashes if any profiling method crashes #377

Open
amontanez24 opened this issue Jan 27, 2022 · 0 comments
Labels: bug (Something isn't working), internal (The issue doesn't change the API or functionality)

Comments

@amontanez24
Contributor

Problem Description

Currently, the evaluate_transformer_performance method profiles the provided transformer against the provided dataset generator for three different sizes. It does this by running profile_transformer, which runs fit, transform, and reverse_transform on the transformer, timing each method and capturing its peak memory. The problem is that if profiling any one of these methods fails, the whole evaluate_transformer_performance call errors out, so any results that were already collected are lost.
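
For reference, here is a minimal sketch of the kind of per-method measurement described above, using timeit and tracemalloc. The helper name and structure are illustrative assumptions, not RDT's actual profile_transformer implementation:

```python
import timeit
import tracemalloc


def profile_method(method, *args):
    """Time a single call and capture its peak memory usage.

    Returns (elapsed_seconds, peak_memory_bytes).
    """
    tracemalloc.start()
    start = timeit.default_timer()
    method(*args)
    elapsed = timeit.default_timer() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak
```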

Expected behavior

We should catch any errors thrown while profiling the memory or time of any method and replace that measurement with np.nan, so that the rest of the performance values can still be collected.
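
A minimal sketch of the proposed behavior, reusing the hypothetical profile_method helper above: any error from a single profiling call is caught and recorded as np.nan so the remaining measurements are preserved:

```python
import numpy as np


def safe_profile_method(method, *args):
    """Profile one call; on failure, record np.nan instead of raising."""
    try:
        # profile_method is the hypothetical helper sketched above
        return profile_method(method, *args)
    except Exception:
        # Replace the failed measurement so the rest of the results survive
        return np.nan, np.nan
```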

@amontanez24 amontanez24 added the bug Something isn't working label Jan 27, 2022
@npatki npatki added the internal The issue doesn't change the API or functionality label Jun 10, 2022