Changes to allow the export of functions with no user input. #8031
base: main
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/8031
Note: Links to docs will display an error until the docs builds have been completed.
❌ 3 New Failures as of commit bb07bb0 with merge base 7bc06d1. The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hmm, I thought we already supported functions with no inputs, but I guess before we only tested with constant returns. Now there's no user return either. cc @tarun292 who I think worked on the no-inputs thing before.
```python
# only check if inputs are allocated if there are user inputs:
user_inputs_exist = len(list(filter(
    lambda input: input.kind == InputKind.USER_INPUT,
    self.graph_signature.input_specs,
))) > 0

if "placeholder" in check_list and user_inputs_exist:
```
This change looks fine. On our side we should probably take a pass over all the memory planning infra. A lot of it was written before export graphs were functionalized, so the concept of non-user inputs didn't exist.
Like realistically we should /only/ be verifying the memory planning of user inputs here.
```python
# this is a bit hacky, but I am not certain of what the contract is for
# node ordering. Is the first non-placeholder node guaranteed to be the
# first node after input parameters? What if there is no op, and it is
# just placeholders? Easier to just find the last buffer, and insert after.
```
I think it would be fine to leave it as None if there are no user inputs -- then it will insert it as the first placeholder. The official order of inputs is [parameters, buffers, constants, user inputs], but the order of the first 3 doesn't matter as much.
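A pure-Python sketch of that insertion rule (hypothetical helper; `placeholders` stands in for an ordered list of `(name, kind)` pairs following the official ordering [parameters, buffers, constants, user inputs]):

```python
def insertion_anchor(placeholders):
    """Return the placeholder name after which lifted constants
    should be inserted, or None to insert at the very front.

    Hypothetical sketch: `placeholders` is an ordered list of
    (name, kind) pairs in the official input order
    [parameters, buffers, constants, user inputs].
    """
    for i, (_, kind) in enumerate(placeholders):
        if kind == "user_input":
            # Insert just before the first user input.
            return placeholders[i - 1][0] if i > 0 else None
    # No user inputs: leaving the anchor as None and inserting as
    # the first placeholder is fine, since the relative order of
    # parameters, buffers, and constants doesn't matter as much.
    return None
```

With no user inputs this returns `None` rather than guessing at a "last buffer", matching the suggestion above.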
```diff
@@ -306,6 +335,9 @@ def lift_constant_tensor_pass(ep):
     new_input_specs.extend(lifted_constants)
```
And if we just insert all the constants at the beginning, we can just directly prepend to input_specs.
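In other words (illustrative sketch, treating `input_specs` as a plain list rather than the PR's actual data structures):

```python
def prepend_constants(input_specs, lifted_constants):
    # The relative order of parameters, buffers, and constants does
    # not matter much, so constants can simply go first; this avoids
    # tracking the position of the first user input entirely.
    # (Hypothetical helper, not the PR's code.)
    return list(lifted_constants) + list(input_specs)
```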
```diff
@@ -306,6 +335,9 @@ def lift_constant_tensor_pass(ep):
     new_input_specs.extend(lifted_constants)
     lifted_constants.clear()
     new_input_specs.append(s)
```
Can you add a test case?
Summary
This is a set of changes I have been using to get ExecuTorch working with multiple entrypoints and shared state.
This is related to: #8030
Issue & Changes
Currently, trying to export a function with no user inputs will fail, because in to_edge (in exir's program.py) the lifted constants are assumed to be placed before the first user input. This causes weird behavior when there are no user inputs. I changed it to place the lifted constants after the last buffer when there are no user inputs to place them before. However, this relies on my understanding of the implicit layout of the graph inputs.
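For concreteness, the failing case is roughly a module like this (hypothetical example based on the description: a buffer-initialization method with no user inputs or outputs):

```python
import torch

class InitModule(torch.nn.Module):
    """Hypothetical module: forward takes no user inputs and only
    initializes a buffer, so the export has no user I/O at all."""

    def __init__(self):
        super().__init__()
        self.register_buffer("state", torch.zeros(4))

    def forward(self):
        # Mutate the buffer in place; nothing is returned to the user.
        self.state.add_(torch.ones(4))
```

Exporting this with `torch.export.export(InitModule(), ())` and then lowering via to_edge is where the constant-placement and memory-planning issues described here show up.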
At the same time, after the memory planning phase of to_executorch, it validates that memory planning was correct based on whether the graph_input_allocated flag is set. This only applies to user inputs, of which we have none, so it errors out. I added a check to bypass this error if there are no user inputs, but I honestly do not understand enough of the validation check to know if that is appropriate.

Comments
In the current ExecuTorch, with no support for shared state, this case does not make sense, but #8030 is my attempt at adding that capability, and having initialization methods that init the buffers from constants is useful, especially since their initial state is undefined.
Currently this is not ready: I have no tests, my random notes are still in the commit as comments, and beyond validating that it worked while developing the shared-state export, I have not done anything in depth.
But I would like feedback on whether my solution seems correct, or if I am missing something, particularly regarding my understanding of placeholder ordering and signature logic, and whether bypassing the graph_input_allocated validation is appropriate.
Out of all the changes I had to make for shared state, this is the one I am least sure about.
cc @JacobSzwejbka @angelayi