Better handling for layers with multiple inputs w/ outputsize #1466
Comments
Should be easy to do. Edit -- The first question here is what's the set of inputs to be allowed. `Parallel` treats a tuple of arrays the same as multiple arrays, and the … Maybe another feature to consider is making …
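For context, a small illustration of the behaviour being referred to, assuming a Flux version where `Parallel` accepts multiple inputs (#1462); the layer sizes here are arbitrary:

```julia
using Flux

# Parallel applies each wrapped layer to the matching input, then
# combines the results with the connection (here vcat).
m = Parallel(vcat, Dense(2, 3), Dense(4, 3))

x, y = rand(Float32, 2, 5), rand(Float32, 4, 5)

m(x, y)    # two array arguments, one per layer
m((x, y))  # one tuple of arrays: treated the same as m(x, y)
```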
Try/catch would be bad for AD, so I'd punt on it for the time being.
How would AD be involved?
The codegen for the forward pass would have to look at the try-catch, which would slow down AD.
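A minimal sketch of the kind of construct under discussion (nothing like this exists in Flux; the helper name is made up):

```julia
# Hypothetical helper: attempt to apply a layer and fall back when it
# throws. Per the objection above, an AD like Zygote would then have to
# handle this try/catch in its forward-pass codegen.
function tryapply(layer, x)
    try
        return layer(x)
    catch
        return nothing
    end
end
```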
But -- see lines 98 to 102 in 8dfe4fa -- are you suggesting that there's a use case in which people call AD on that, somehow?
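The embedded snippet did not survive extraction. For orientation, a paraphrase of the mechanism that `outputsize` is built on (a sketch, not the verbatim lines; `nilsize` is a made-up name):

```julia
# Nil is a Real whose arithmetic always returns nil, so pushing a
# Nil-filled array through a model propagates shapes but no values.
struct Nil <: Real end
const nil = Nil()

Base.:+(::Nil, ::Nil) = nil
Base.:*(::Nil, ::Nil) = nil
# ...the real implementation overloads many more operations.

# Size query in the same spirit: run on a Nil array, read off the size.
nilsize(m, insize::Tuple) = size(m(fill(nil, insize)))

nilsize(x -> x .+ x, (3, 4))  # (3, 4), touching only the + overload
```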
I was thinking of that as a general construct where this is embedded into a version of … Sort of like AutoML, but not quite the same thing.
I'm imagining it as a utility for automated model building and optimisation of the hyperparameter selection.
Well, OK, that sounds like a much more ambitious piece of machinery; maybe open an issue? I guess I'm already bending this issue away from its original intent, but … is still about the same function, in which there is no AD.
Yeah, better to leave AD in `outputsize` for another issue. I was thinking that, for this issue, it should treat …
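One possible shape for that direction, building on the `Nil` sketch above (the vararg method and its name are hypothetical, not the API that was eventually merged):

```julia
# Hypothetical vararg method: one Tuple of sizes per model input,
# reusing Nil/nil from the sketch above.
nilsize(m, insizes::Tuple...) = size(m(map(s -> fill(nil, s), insizes)...))

# For a two-input Parallel layer one could then ask:
# nilsize(p, (2, 5), (4, 5))
```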
Given the variable argument version of `Parallel` (#1462) and #1009, it seems like we need better support for multiple arguments as layer inputs. Currently, `outputsize` only understands a single `Tuple` for `inputsize`.
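To make the limitation concrete, roughly the contrast as of the Flux version under discussion (the commented-out call is the feature being requested, not something that ran):

```julia
using Flux

# Single-input model: outputsize takes one Tuple for inputsize.
c = Chain(Dense(10, 5), Dense(5, 2))
Flux.outputsize(c, (10, 1))           # (2, 1)

# Multi-input model: no way to pass one size per input.
p = Parallel(vcat, Dense(2, 3), Dense(4, 3))
# Flux.outputsize(p, (2, 5), (4, 5))  # what this issue asks for
```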