Expand benchmarks for dataset insertion and creation #7236
Conversation
On the CI, it reports similar findings:
headtr1ck left a comment:
I think these different ways of creating a Dataset can be combined into a matrix test, by passing in a function that creates your data_vars.
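A minimal sketch of what such a matrix test could look like, assuming an ASV-style benchmark class (the names `data_vars_as_tuples`, `data_vars_as_dataarrays`, and `DatasetCreation` are hypothetical, not taken from the PR): each construction strategy is a function passed through `params`, so all strategies share one timing method.

```python
# Hypothetical sketch of an ASV-style "matrix" benchmark: each way of
# building data_vars is a function passed in as a parameter.
import numpy as np
import xarray as xr


def data_vars_as_tuples(shape):
    # plain (dims, data) tuples wrapping numpy arrays
    return {f"var{i}": (("x", "y"), np.zeros(shape)) for i in range(10)}


def data_vars_as_dataarrays(shape):
    # full DataArrays, each carrying its own dims
    return {
        f"var{i}": xr.DataArray(np.zeros(shape), dims=("x", "y"))
        for i in range(10)
    }


class DatasetCreation:
    # ASV runs every method once per entry in params
    params = [data_vars_as_tuples, data_vars_as_dataarrays]
    param_names = ["make_data_vars"]

    def setup(self, make_data_vars):
        self.data_vars = make_data_vars((100, 100))

    def time_dataset_creation(self, make_data_vars):
        xr.Dataset(data_vars=self.data_vars)
```

This keeps the timed body identical across variants, so any difference in the results reflects the data_vars construction path rather than the benchmark code itself.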
Force-pushed from 6c7ff52 to 1a66d38
With the right window size it looks like:
Force-pushed from f8d69f6 to 73d5f79
I still think the DataArray version is "unfair", since every dict item carries its own coords, so xarray has to check that they are all the same.
What about just specifying "dims"?
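To illustrate the point above, a small sketch (not code from the PR) comparing the two construction styles: when every dict item is a DataArray carrying the same coords, the `Dataset` constructor has to verify each copy for compatibility, whereas bare `(dims, data)` tuples carry no coords and the shared coords dict is processed once.

```python
import numpy as np
import xarray as xr

coords = {"x": np.arange(100)}

# Each DataArray carries its own copy of the coords, so the
# constructor must check per item that they all agree.
ds_from_dataarrays = xr.Dataset(
    {
        f"var{i}": xr.DataArray(np.zeros(100), dims="x", coords=coords)
        for i in range(5)
    }
)

# Bare (dims, data) tuples carry no coords; the shared coords
# dict is attached once at the Dataset level.
ds_from_tuples = xr.Dataset(
    {f"var{i}": ("x", np.zeros(100)) for i in range(5)},
    coords=coords,
)

# Both spellings produce the same Dataset.
assert ds_from_dataarrays.identical(ds_from_tuples)
```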
As you thought, the numbers improve quite a bit. I kinda want to understand why a no-op takes 1 ms! ^_^
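One quick way to chase that "no-op" cost outside the benchmark suite (a sketch, not from the PR) is to time the empty constructor directly with `timeit`:

```python
# Time construction of an empty Dataset to see the fixed overhead
# that every Dataset creation pays, independent of the data passed in.
import timeit

import xarray as xr

per_call = timeit.timeit(lambda: xr.Dataset(), number=1000) / 1000
print(f"empty Dataset construction: {per_call * 1e6:.1f} µs per call")
```

Profiling the same call with `cProfile` would then show where inside the constructor that fixed overhead is spent.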
Force-pushed from 6309b9f to 4658836
Taken from discussions in pydata#7224 (comment). Thank you @Illviljan
Force-pushed from 4658836 to 0be3712
Co-authored-by: Illviljan <14371165+Illviljan@users.noreply.github.com>
Well, now the benchmarks look like they make more sense:
for more information, see https://pre-commit.ci
Thanks @hmaarrfk!
whats-new.rst
api.rst