
[Operator] Add dynamic shape support and tests for Operators. #274

Merged: 3 commits merged into hidet-org:main on Jun 7, 2023

Conversation

@Aalanli (Collaborator) commented on Jun 7, 2023

Altered the operators below and added the associated tests for dynamic shape support (a sketch of such a test follows the list).

Changed:

  • conv1d / conv1d_transposed
  • conv2d / conv2d_transposed
  • conv3d / conv3d_transposed
  • batch_matmul
  • matmul_fp16
  • softmax
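
For context, a dynamic-shape test for one of these operators could look roughly like the sketch below. This is illustrative only: it assumes `hidet.symbol` accepts string-named (symbolic) dimensions and that the traced graph is directly callable; see the actual test files in this PR for the real utilities.

```python
import numpy as np
import hidet

def test_batch_matmul_dynamic_batch():
    # 'b' is a symbolic batch dimension (assumed API; see lead-in above).
    a = hidet.symbol(['b', 32, 64], dtype='float32', device='cuda')
    b = hidet.symbol(['b', 64, 16], dtype='float32', device='cuda')
    c = hidet.ops.batch_matmul(a, b)
    graph = hidet.trace_from(c, inputs=[a, b])
    # One compiled graph should serve several concrete batch sizes.
    for batch in (1, 4, 9):
        x = hidet.randn([batch, 32, 64], device='cuda')
        y = hidet.randn([batch, 64, 16], device='cuda')
        z = graph(x, y)
        np.testing.assert_allclose(
            z.cpu().numpy(),
            x.cpu().numpy() @ y.cpu().numpy(),
            rtol=1e-3, atol=1e-3,
        )
```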

However, certain shape checks cannot be performed at compile time for dynamic dimensions, so I have skipped them for now: if a dimension is dynamic, the shape check simply does not happen. I think restoring these checks is a definite must for the future.
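
Concretely, the skipped checks follow a simple pattern, sketched below with hypothetical helper names (the real checks live in the individual operator task definitions):

```python
from typing import Union

def check_same_channels(x_channels: Union[int, object],
                        w_channels: Union[int, object]) -> None:
    # Only compare when both dimensions are concrete integers. A
    # symbolic (dynamic) dimension cannot be checked at compile time,
    # so the check is skipped for now (see the 'assert' node idea below).
    if isinstance(x_channels, int) and isinstance(w_channels, int):
        if x_channels != w_channels:
            raise ValueError(
                f'channel mismatch: {x_channels} vs {w_channels}')
```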

Difficulties arise when resolve rules rely on shapes. A short-term fix would be to fall back to the default implementation when a shape is dynamic; a long-term fix would probably be to add conditionals to the graph IR, although that may not play very nicely with fusion...
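
The short-term fallback could be as small as the sketch below (hypothetical rule body; hidet's resolve rules signal "not applicable" by returning `None`, which this mirrors):

```python
def resolve(op):
    # Hypothetical sketch: if any input dimension is symbolic, decline
    # to specialize and let the default implementation handle the op.
    dims = [d for inp in op.inputs for d in inp.shape]
    if not all(isinstance(d, int) for d in dims):
        return None
    return resolve_with_static_shapes(op)  # hypothetical fast path
```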

Similarly, a long-term fix for operators and task mappings would probably be to add some sort of 'assert' node to the IR, so that assertions get added to the launch function during codegen.
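
For illustration, the proposed node might look like the sketch below; nothing here exists in the codebase yet, it is just the shape of the idea:

```python
from dataclasses import dataclass

@dataclass
class AssertStmt:
    """Hypothetical IR statement: codegen would lower it into the
    generated launch function as a runtime check, roughly
        if (!(cond)) { /* report msg and abort */ }
    """
    cond: object  # boolean expression over (possibly symbolic) shapes
    msg: str      # diagnostic message to emit when the check fails
```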

Allan Lin and others added 3 commits June 6, 2023 16:57
@Aalanli changed the title from "[Operators] Add dynamic shape support and tests for Operators." to "[Operator] Add dynamic shape support and tests for Operators." on Jun 7, 2023
@yaoyaoding (Member) commented:
Thanks @Aalanli !

@yaoyaoding merged commit ec23670 into hidet-org:main on Jun 7, 2023
@Aalanli deleted the dyn-shape branch on September 27, 2023
vadiklyutiy added commits referencing this pull request on Jul 22, 2024, Jul 23, 2024, and Dec 26, 2024, each with the following message:

- Adopted our scripts to use `mode` from `torch.compile`.
- Changed `regroup_modules` in `build_ir_module_batch` so it does not create jobs bigger than `MAX_JOB_PER_WORKER` (fixes issue #207).
- Trimmed and optimized tests for the torch backend:
  - moved densenet121 to slow
  - moved resnet50 to slow, but added resnet18

(The last two points came from attempts to enable `mode='max-autotune'` in tests; there are still additional issues.)
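
The `regroup_modules` change described above boils down to a bounded chunking pattern. A sketch follows; only the names are taken from the commit message, and the constant's real value lives in hidet:

```python
from typing import List, Sequence

MAX_JOB_PER_WORKER = 8  # illustrative value only

def regroup_modules(ir_modules: Sequence, num_workers: int) -> List[list]:
    # Spread modules across workers, but cap each job at
    # MAX_JOB_PER_WORKER modules so no single job grows unboundedly.
    per_job = max(1, (len(ir_modules) + num_workers - 1) // num_workers)
    per_job = min(per_job, MAX_JOB_PER_WORKER)
    return [list(ir_modules[i:i + per_job])
            for i in range(0, len(ir_modules), per_job)]
```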