test: working on benchmarks with Finch as backend #772 (base: main)
Conversation
CodSpeed Performance Report: merging #772 will not alter performance.

Benchmarks breakdown
Great work @DeaMariaLeon! A few suggestions.
```diff
 m, n, p = sides

-if m * n >= max_size or n * p >= max_size:
+if m * n >= max_size or n * p >= max_size or m * n <= min_size or n * p <= min_size:
     pytest.skip()

 rng = np.random.default_rng(seed=seed)
 x = sparse.random((m, n), density=DENSITY, format=format, random_state=rng)
 y = sparse.random((n, p), density=DENSITY, format=format, random_state=rng)
```
All the tests in test_benchmark_coo.py were meant to fail with Finch. Finch's sparse.random (used to build x and y on lines 25 and 26) is different: it doesn't have the format argument that is used with Numba.
@mtsokol How hard would it be to add this to finch?
You can probably use sparse.asarray(..., format=<format>)
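The suggested fallback can be sketched as follows. This is a hedged illustration, not the PR's code: `make_random`, the two `SimpleNamespace` stand-ins, and their tuple return values are all invented here so the pattern runs without either real backend installed. The idea is to try the Numba-style call (where `random` accepts `format`) and, when that raises, generate first and reformat via `asarray`, as suggested above.

```python
from types import SimpleNamespace

def make_random(sparse, shape, density, fmt, rng):
    """Build a random sparse array in the requested format on either backend."""
    try:
        # Numba-style backend: sparse.random accepts `format` directly.
        return sparse.random(shape, density=density, format=fmt, random_state=rng)
    except TypeError:
        # Finch-style backend: `random` has no `format`; generate, then
        # reformat with asarray(..., format=...).
        arr = sparse.random(shape, density=density, random_state=rng)
        return sparse.asarray(arr, format=fmt)

# Fake "Numba" backend: random() takes a format argument.
numba_like = SimpleNamespace(
    random=lambda shape, density, format, random_state: ("numba", shape, format)
)
# Fake "Finch" backend: random() has no format; asarray() reformats.
finch_like = SimpleNamespace(
    random=lambda shape, density, random_state: ("raw", shape),
    asarray=lambda arr, format: ("finch", arr[1], format),
)

print(make_random(numba_like, (10, 20), 0.01, "coo", None))  # ('numba', (10, 20), 'coo')
print(make_random(finch_like, (10, 20), 0.01, "coo", None))  # ('finch', (10, 20), 'coo')
```

Either way, the benchmark body itself stays backend-agnostic.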
Do we just need a format=nothing default and then a reformatting line inside finch.random?
That'd suffice -- if the format isn't what the format arg demands, reformat.
do we have a way of specifying format currently?
> do we have a way of specifying format currently?

asarray supports a format arg: https://github.com/willow-ahrens/finch-tensor/blob/25d5de0c6b0c75120a06c0b1c2ec1568216c71f8/src/finch/tensor.py#L647
```python
if hasattr(sparse, "compiled"):
    operator.matmul = sparse.compiled(operator.matmul)
```
```diff
-if hasattr(sparse, "compiled"):
-    operator.matmul = sparse.compiled(operator.matmul)
+f = operator.matmul
+if hasattr(sparse, "compiled"):
+    f = sparse.compiled(f)
```
```diff
-if hasattr(sparse, "compiled"):
-    operator.matmul = sparse.compiled(operator.matmul)
+def f(x, y):
+    return x @ y
+
+if hasattr(sparse, "compiled"):
+    f = sparse.compiled(f)
```
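The point of both suggestions is the same: rebinding `operator.matmul` mutates a shared module, so the compiled wrapper would leak into every other test, while a local `f` keeps the change scoped to one benchmark. A hedged, runnable sketch of that distinction (not the PR's code: `fake_sparse.compiled` is a stand-in for `sparse.compiled` that merely marks the function it wraps, and `Vec` exists only to give `@` something to dispatch to):

```python
import operator
from types import SimpleNamespace

def marking_compile(func):
    """Stand-in for sparse.compiled: wraps func and tags the wrapper."""
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    wrapper.was_compiled = True  # marker so the wrapping is observable
    return wrapper

fake_sparse = SimpleNamespace(compiled=marking_compile)

class Vec:
    """Minimal type supporting @ so operator.matmul has work to do."""
    def __init__(self, data):
        self.data = data
    def __matmul__(self, other):
        return sum(a * b for a, b in zip(self.data, other.data))

# The suggested pattern: wrap a local name, leave the module alone.
f = operator.matmul
if hasattr(fake_sparse, "compiled"):
    f = fake_sparse.compiled(f)

print(f(Vec([1, 2]), Vec([3, 4])))               # 11
print(getattr(f, "was_compiled", False))          # True: local name is wrapped
print(hasattr(operator.matmul, "was_compiled"))   # False: module untouched
```

The second suggestion (defining `def f(x, y): return x @ y`) has the extra benefit that the compiled function is an ordinary Python function, which some compilers handle better than a builtin.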
```python
    pytest.skip()

rng = np.random.default_rng(seed=seed)
x = sparse.random((m, n), density=DENSITY, format=format, random_state=rng)
y = sparse.random((n, p), density=DENSITY, format=format, random_state=rng)

if hasattr(sparse, "compiled"):
    operator.matmul = sparse.compiled(operator.matmul)

x @ y  # Numba compilation
```
```diff
-x @ y  # Numba compilation
+f(x, y)  # Compilation
```
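The warm-up call matters because with a JIT backend the first invocation pays the compilation cost; calling `f` once before the timed region keeps that cost out of the measurement. A backend-free sketch of the effect (the fake JIT below is invented for illustration; it simulates one-off compilation with a sleep):

```python
import time

def make_fake_jit(func, compile_time=0.05):
    """Wrap func so its first call pays a simulated compilation cost."""
    state = {"compiled": False}
    def wrapper(*args):
        if not state["compiled"]:
            time.sleep(compile_time)  # simulated one-off compilation
            state["compiled"] = True
        return func(*args)
    return wrapper

f = make_fake_jit(lambda x, y: x * y)

f(2, 3)  # warm-up: triggers the (simulated) compilation

start = time.perf_counter()
result = f(2, 3)  # measured call: compilation cost already paid
elapsed = time.perf_counter() - start

print(result)           # 6
print(elapsed < 0.05)   # True: no compilation inside the timed region
```

This is exactly why the benchmark separates the warm-up call from the code CodSpeed measures.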
```diff
 m, n, p = sides

-if m * n >= max_size or n * p >= max_size:
+if m * n >= max_size or n * p >= max_size or m * n <= min_size or n * p <= min_size:
```
```diff
-if m * n >= max_size or n * p >= max_size or m * n <= min_size or n * p <= min_size:
+if m * n >= max_size or n * p >= max_size or m * n * DENSITY <= min_size or n * p * DENSITY <= min_size:
```
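The rationale for scaling by DENSITY: `m * n` is the dense element count, while `m * n * DENSITY` approximates the number of *stored* values of a random sparse array, which is what actually bounds the benchmark's work. A small self-contained sketch (the DENSITY, min_size, and max_size values here are illustrative, not the repo's):

```python
DENSITY = 0.01
min_size, max_size = 100, 2**26

def should_skip(m, n, p):
    """Skip benchmarks that are too big dense or too small sparse."""
    return (
        m * n >= max_size
        or n * p >= max_size
        or m * n * DENSITY <= min_size   # expected nnz of x too small
        or n * p * DENSITY <= min_size   # expected nnz of y too small
    )

print(should_skip(50, 50, 50))        # True: expected nnz ~25, below min_size
print(should_skip(1000, 1000, 1000))  # False: expected nnz ~10000, worth timing
```

Without the DENSITY factor, problems with plenty of dense elements but almost no stored values would still be benchmarked, producing noisy timings.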
What type of PR is this? (check all applicable)
Related issues
Checklist
Please explain your changes below.

Only added Finch on config.py (locally I see the names of the backend, but I'm not sure if this renders fine on CodSpeed). Also, per Hameer's instructions, I need to see which tests fail with Finch. I would like to see if the change in codspeed.yml works as expected. Finally, I have only added the line `if hasattr(sparse, "compiled"): f = sparse.compiled(f)` to the first two benchmarks, to make sure I correctly understood.