
[Driver] Make compilation more compatible with multi-processing #350 #351

Merged
merged 6 commits into from
Aug 24, 2023

Conversation

xinli-git
Collaborator

@xinli-git xinli-git commented Aug 21, 2023

This change adds a file lock to task compilation so that workflows such as distributed inference build each task only once and avoid any potential data (file) races.

Currently, only task building is guarded by the file lock because, in general, compiled graphs will differ between processes, and compiled modules are already protected by the task-building lock.

yaoyaoding and others added 6 commits August 5, 2023 00:07
…del support (hidet-org#347)

1. Enhance support for `__setitem__` and `__getitem__` of Tensor; Add
SetStridedSlice Op, Roll Op.
2. Add/Update torch mapping for adaptive_avg_pool3d, eq, pad, roll,
matmul, new_zeros, batch_norm, MultiHeadAttention.
3. Update torch Linear mapping to optionally accept transposed weights.
4. Fix a bug where an empty graph would output a zero tensor instead of
the input/weight.
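For illustration, the strided-slice write and roll semantics that the new SetStridedSlice and Roll ops cover can be shown with NumPy (NumPy is just a stand-in here for hidet Tensor behavior, not the actual implementation):

```python
import numpy as np

# __setitem__ with a strided slice: the pattern SetStridedSlice implements.
x = np.zeros((4, 6))
x[1:4:2, ::3] = 7.0          # writes rows 1 and 3, columns 0 and 3
assert x[1, 0] == 7.0 and x[3, 3] == 7.0
assert x.sum() == 28.0       # 4 elements set to 7.0

# Roll op semantics: elements shifted with wrap-around.
y = np.roll(np.arange(5), shift=2)
assert y.tolist() == [3, 4, 0, 1, 2]
```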
…hidet-org#345)

Encountered a few minor issues when compiling a transformer-based model
with torch.compile at very large batch sizes; submitting the fixes here.
This is a continuation of hidet-org#347.

1. Add LP normalization task (ToDo: schedule template)
2. Add torch mappings for normalize, clone, zero_, exp, chunk
3. Add ceil_mode=True support for pool2d
4. Fix dtype issue in resize
5. Fix other bugs in pad, conv2d_pattern
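The semantics the LP-normalization task needs to implement (the `p=2` case of `torch.nn.functional.normalize`, which divides each vector along `dim` by its Lp norm, clamped away from zero) can be sketched in NumPy for illustration; the `eps` value mirrors PyTorch's default:

```python
import numpy as np

# L2-normalize each row, guarding against zero-norm rows (PyTorch uses
# eps=1e-12 as the clamp floor by default).
x = np.array([[3.0, 4.0], [0.0, 5.0]])
norms = np.maximum(np.linalg.norm(x, ord=2, axis=1, keepdims=True), 1e-12)
y = x / norms
assert np.allclose(y, [[0.6, 0.8], [0.0, 1.0]])
assert np.allclose(np.linalg.norm(y, axis=1), 1.0)  # unit-norm rows
```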
Add an ad-hoc implementation of einsum based on pattern matching. Only
supports batched matmul.
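The one pattern the ad-hoc einsum matches, batched matmul, corresponds to equations equivalent to `'bij,bjk->bik'`; other equations (traces, general reductions, transposed outputs) would not match. A NumPy sketch of that equivalence, for illustration only:

```python
import numpy as np

# The supported pattern: einsum 'bij,bjk->bik' is exactly a batched matmul.
rng = np.random.default_rng(0)
a = rng.random((2, 3, 4))
b = rng.random((2, 4, 5))
out = np.einsum('bij,bjk->bik', a, b)
assert out.shape == (2, 3, 5)
assert np.allclose(out, a @ b)  # identical to the batched @ operator
```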
@xinli-git
Collaborator Author

Hi @soodoshll, maybe a quick review?

@soodoshll
Collaborator

@xinli-git LGTM!

@xinli-git xinli-git changed the base branch from main to auto-parallel August 24, 2023 18:17
@xinli-git xinli-git merged commit ab5b738 into hidet-org:auto-parallel Aug 24, 2023
@xinli-git xinli-git deleted the concurrent_task_build branch August 24, 2023 18:18
vadiklyutiy pushed a commit that referenced this pull request Jul 22, 2024
… for conv-bert-base model (#351)

Added support for `torch.multiply` and `torch.nn.functional.unfold`.
These ops are needed in `conv-bert-base` models.

---------

Co-authored-by: Zhumakhan <nazirzhumakhan@gmail.com>
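The `unfold` (im2col) semantics being mapped here can be sketched in NumPy. `unfold2d` below is a hypothetical helper for illustration, not hidet code, and covers only stride 1 with no padding or dilation:

```python
import numpy as np

def unfold2d(x: np.ndarray, k: int) -> np.ndarray:
    """im2col: extract k*k sliding patches, flattened into columns.

    Mirrors torch.nn.functional.unfold for stride=1, padding=0:
    (N, C, H, W) -> (N, C*k*k, L) with L = (H-k+1)*(W-k+1).
    """
    n, c, h, w = x.shape
    cols = []
    for i in range(h - k + 1):          # patch positions, row-major,
        for j in range(w - k + 1):      # matching torch's column order
            cols.append(x[:, :, i:i+k, j:j+k].reshape(n, c * k * k))
    return np.stack(cols, axis=2)

x = np.arange(16, dtype=float).reshape(1, 1, 4, 4)
out = unfold2d(x, 2)
assert out.shape == (1, 4, 9)                       # 9 patches of 4 values
assert out[0, :, 0].tolist() == [0.0, 1.0, 4.0, 5.0]  # top-left 2x2 patch
```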
vadiklyutiy pushed a commit that referenced this pull request Jul 23, 2024
… for conv-bert-base model (#351)

vadiklyutiy pushed a commit that referenced this pull request Dec 26, 2024
… for conv-bert-base model (#351)