Support for op_type "Resize" #35
base: master
Conversation
@seanmor5 Is this the right place for the method? :D
@jnnks Yes!
@jnnks This looks good so far! I added some review comments. Also, ONNX provides a lot of built-in node tests if you want to use those to verify the operator behaves as expected! Check `deserialize_test` for examples.
Co-authored-by: Sean Moriarity <smoriarity.5@gmail.com>
Regarding the tests: Where can I get the …
I believe it comes with the ONNX runtime Python package!
* add mode cubic
I am mostly working to comply with the unit tests now. A couple of questions and comments:
Seems like this will take a bit longer than expected :D
Is there something I could do to help here? I could use this PR, but it seems "frozen". Any major changes in the approach/library?
@jnnks @NotQuiteLagom Sorry I let this one fall off. There is an open issue upstream to fix Axon's resize operator. I will add it in later, as I have a working implementation from another library, and then I will come back to this PR for review :)
@jnnks I just updated Axon's resize implementation. It only works with 2D spatial inputs (so you need output shapes with rank 4). These changes should help get you closer.
Note that `constant!` forces the input you're giving it to be a constant, so you can always convert it into a flat list. We enforce this at certain places because Nx can't handle dynamic shapes in certain instances, but usually this constraint isn't a big deal.
This does not work, unfortunately. And when using the …
@seanmor5 I need your wisdom :)
The values from … (I also removed a lot of unimportant features that will be added again later, when the function actually works)
lib/axon_onnx/deserialize.ex
Outdated
```elixir
# resize function
fun = fn input, scales, _opts ->
  # this will return a Tensor, but we need a tuple;
  # we are in a Defn env now, so Nx.to_flat_list() does not work
  output_size =
    input
    |> Nx.shape()
    |> Tuple.to_list()
    |> Nx.tensor()
    |> Nx.multiply(scales)
    |> IO.inspect()

  Axon.Layers.resize(input, size: output_size, method: method)
end
```
This fails, since `Axon.Layers.resize` wants a tuple instead of a tensor for `output_size`.
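Since Nx shapes are static at build time, one way around the tuple-vs-tensor mismatch is to compute the output size as plain integers before entering the numerical function. A minimal sketch of the idea in Python for illustration (not the actual axon_onnx code; the function name is made up):

```python
# Because the input shape is static, the output size can be derived as
# ordinary integers outside the compiled function, giving the tuple that
# a resize layer expects instead of a runtime tensor.
import math

def resize_output_size(input_shape, scales):
    # floor(dim * scale) per axis, the usual rounding for output dims
    return tuple(math.floor(dim * s) for dim, s in zip(input_shape, scales))

print(resize_output_size((1, 1, 2, 4), [1.0, 1.0, 0.6, 0.6]))  # -> (1, 1, 1, 2)
```

The same computation can be done in Elixir on the shape tuple before the `fn` is built, so nothing dynamic leaks into the Defn environment.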
@jnnks If the tests depend on non-constant inputs and raise, then it's okay to ignore them in that case. For a lot of functions we cannot implement the tests because of this dependency on constants; a better test is probably an integration test involving a model that uses a Resize.
It seems counterintuitive to me to use an industry standard and then deviate from it. Especially since the standard has well-defined unit tests, this seems strange to me. I'll convert the predefined onnx models to use constant inputs so we can test against those.
We don't really have a choice: Nx does not support dynamic input shapes, so we need to enforce this constraint. In most cases it ends up being fine; there are only a few edge cases you encounter in practice where dynamic shapes are needed, and for the most part they can be mitigated or refactored to meet a static shape requirement.
Looks like `Axon.resize` does not deliver the results expected by the onnx test case (Input, Scales, and Output are shown in [1] below). Since the scales are both …
@jnnks I think you might be right, but is there a parameter we're not handling that might affect the behavior here?
Not as far as I can see, but I will check the spec again for implicit defaults that may be causing it.
We have two problems:
To make the operator testable we need to do the following:
[1] Different return values

```
input: #Nx.Tensor<
  f32[1][1][2][4]
  [
    [
      [
        [1.0, 2.0, 3.0, 4.0],
        [5.0, 6.0, 7.0, 8.0]
      ]
    ]
  ]
>

resize with: {"scales",
  #Nx.Tensor<
    f32[4]
    [1.0, 1.0, 0.6000000238418579, 0.6000000238418579]
  >}

expected_output: #Nx.Tensor<
  f32[1][1][1][2]
  [
    [
      [
        [2.6666665077209473, 4.3333330154418945]
      ]
    ]
  ]
>

actual_output: #Nx.Tensor<
  f32[1][1][1][2]
  [
    [
      [
        [4.199999809265137, 5.466666221618652]
      ]
    ]
  ]
>
```
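The expected values in [1] can be reproduced by hand assuming linear interpolation with the ONNX spec's default `coordinate_transformation_mode` of `half_pixel`. A pure-Python sketch for illustration (function names are made up, not from Axon or ONNX):

```python
# Reproduce the node test's expected output: map each output coordinate
# to an input coordinate with the "half_pixel" rule, then interpolate
# linearly along width and height.
import math

def half_pixel(x_out, scale):
    # ONNX default mapping from output to input coordinates
    return (x_out + 0.5) / scale - 0.5

def interp_1d(values, x):
    # clamp, then linearly interpolate between the two nearest samples
    x = min(max(x, 0.0), len(values) - 1)
    lo = math.floor(x)
    hi = min(lo + 1, len(values) - 1)
    frac = x - lo
    return values[lo] * (1 - frac) + values[hi] * frac

plane = [[1.0, 2.0, 3.0, 4.0],
         [5.0, 6.0, 7.0, 8.0]]          # the 2x4 spatial plane from [1]
scale_h = scale_w = 0.6
out_h, out_w = int(2 * scale_h), int(4 * scale_w)  # 1 x 2, as in [1]

out = []
for i in range(out_h):
    y = half_pixel(i, scale_h)
    row = []
    for j in range(out_w):
        x = half_pixel(j, scale_w)
        # interpolate along width in each row, then along height
        row.append(interp_1d([interp_1d(r, x) for r in plane], y))
    out.append(row)

print(out)  # approximately [[2.6667, 4.3333]]
```

Since this matches `expected_output` above, one hypothesis is that the diverging `actual_output` comes from a different coordinate transformation in the resize implementation rather than from the interpolation itself.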
[2] Input values instead of constants

```
(ArgumentError) unable to build model from ONNX graph, expected value scales to be constant value, but was :input
```

[3] Nx.Tensor to Protobuf
WIP Implementation for Resize Operator
fixes #34