
[Testing][Models] Add gpt2 module in testing models #252

Merged
merged 5 commits into hidet-org:main from gpt2 on May 29, 2023

Conversation

yaoyaoding
Member

Added GPT-2 as `hidet.testing.models.gpt2`, implemented so that it supports both the initial generation pass over the whole prompt and incremental decoding with a key-value cache.
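For background, the two supported modes differ in what attention consumes: the initial pass computes keys and values for the entire prompt, while each subsequent step processes only the new token and reuses cached keys and values. Below is a minimal NumPy sketch of the idea (illustrative only; this is not the `hidet.testing.models.gpt2` API):

```python
import numpy as np

def attention(q, k, v):
    # q: [m, d], k/v: [n, d]; causal masking omitted for brevity
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ v

rng, d = np.random.default_rng(0), 4

# Initial generation: attend over the whole prompt at once.
q = rng.standard_normal((3, d))
k = rng.standard_normal((3, d))
v = rng.standard_normal((3, d))
out = attention(q, k, v)

# Incremental step: only the new token's query; keys/values are
# appended to the cache instead of being recomputed from scratch.
new_q = rng.standard_normal((1, d))
k = np.concatenate([k, rng.standard_normal((1, d))])  # cached keys grow
v = np.concatenate([v, rng.standard_normal((1, d))])  # cached values grow
step_out = attention(new_q, k, v)  # shape [1, d]
```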

Enhancements:

  • Added hook support for compiled models, used to investigate the execution of a compiled model (mainly for debugging); a sketch of the idea follows this list.
  • Fixed a bug in the memory planner.
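A minimal sketch of what hook support for a compiled model can look like (hypothetical names throughout; this is not hidet's actual hook API, just the debugging pattern):

```python
from typing import Callable, List, Optional

# Hypothetical stand-in for a compiled model: a pipeline of operators.
class CompiledModel:
    def __init__(self, ops: List[Callable[[float], float]]):
        self.ops = ops
        self.hook: Optional[Callable] = None

    def register_hook(self, hook: Callable) -> None:
        # hook(index, op_name, output) is called after every operator,
        # letting the user inspect intermediate results while debugging.
        self.hook = hook

    def run(self, x: float) -> float:
        for i, op in enumerate(self.ops):
            x = op(x)
            if self.hook is not None:
                self.hook(i, op.__name__, x)
        return x

def double(x: float) -> float: return 2.0 * x
def inc(x: float) -> float: return x + 1.0

model = CompiledModel([double, inc])
model.register_hook(lambda i, name, out: print(f"op {i} ({name}) -> {out}"))
model.run(3.0)  # prints: op 0 (double) -> 6.0, then op 1 (inc) -> 7.0
```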

@yaoyaoding yaoyaoding force-pushed the gpt2 branch 2 times, most recently from 536592b to 96bac3a on May 28, 2023 19:58
@yaoyaoding yaoyaoding merged commit 6d4bd3d into hidet-org:main May 29, 2023
@yaoyaoding yaoyaoding deleted the gpt2 branch May 29, 2023 16:19
vadiklyutiy pushed a commit that referenced this pull request Jul 22, 2024
…pass (#252)

During the graph rewrite, we still keep constant tensors which could be
deleted. For example, in
[TwoMatmulFusion](https://github.com/CentML/hidet/blob/main/python/hidet/graph/transforms/graph_patterns/matmul_patterns.py#L36):
Initially we have:
```
out1 = Matmul(x, c1)
out2 = Matmul(x, c2)
```
After graph rewrite optimizations:
```
c = concat([c1, c2])
m = Matmul(x, c)
out1, out2 = split(m)
```
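For intuition, here is a minimal NumPy check (independent of hidet) of the equivalence that makes this rewrite safe, assuming `c1` and `c2` are concatenated along the column axis:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3))   # shared input
c1 = rng.standard_normal((3, 4))  # constant weight of the first matmul
c2 = rng.standard_normal((3, 5))  # constant weight of the second matmul

# Before the rewrite: two matmuls against the same input.
out1, out2 = x @ c1, x @ c2

# After the rewrite: one matmul against the concatenated constant, then a
# split. Once c is materialized, c1 and c2 are no longer needed.
c = np.concatenate([c1, c2], axis=1)
m = x @ c
fused1, fused2 = np.split(m, [c1.shape[1]], axis=1)

assert np.allclose(out1, fused1) and np.allclose(out2, fused2)
```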
We can safely remove `c1` and `c2` after computing `c` and set its trace to `None` (as if it were a terminal node). However, `m` cannot be removed, so the compilation process with **hidet** inevitably consumes some additional memory. UPD: `m` is a symbolic tensor (because `x` is symbolic), so it does not occupy any memory.

I am currently testing this approach, but for some reason, after removing those constant tensors, the `resolve_variant_pass` optimization causes all outputs to be `NaN`. If I exclude the `resolve_variant_pass` optimization, it works.

---------

Co-authored-by: Zhumakhan <nazirzhumakhan@gmail.com>
vadiklyutiy pushed a commit that referenced this pull request Jul 23, 2024
…pass (#252)

vadiklyutiy pushed a commit that referenced this pull request Dec 26, 2024
…pass (#252)
