[Core] Asynchronous Output Processor #7049
Changes from all commits
35123ad
a3adaa3
ddaef22
162d3b8
282ade5
c636c15
339216d
1ccb040
043f130
80484a2
36018ae
4614f4c
a5cad38
65dd781
6a7e45a
cff126b
2b6877c
810b0c6
6f2467b
ced7396
0b0421a
```diff
@@ -21,7 +21,7 @@ def append_new_token(seq_group, token_id: int):


 def schedule_and_update_computed_tokens(scheduler):
-    metas, out = scheduler.schedule()
+    metas, out, _ = scheduler.schedule()
     for s, meta in zip(out.scheduled_seq_groups, metas):
         s.seq_group.update_num_computed_tokens(meta.token_chunk_size)
     return metas, out
```
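The hunk above reflects a return-signature change: `scheduler.schedule()` now returns a third value, which this test helper discards. A minimal sketch of the unpacking pattern (the names and the meaning of the third element are assumptions for illustration, not vLLM's exact API):

```python
# Sketch of a schedule() whose return tuple grew from two to three
# elements. The third value (here a flag for asynchronous output
# processing) is an assumption for illustration only.

def schedule():
    seq_group_metadata_list = ["meta0", "meta1"]  # placeholder metadata
    scheduler_outputs = {"scheduled": 2}          # placeholder outputs
    allow_async_output_proc = True                # hypothetical third value
    return seq_group_metadata_list, scheduler_outputs, allow_async_output_proc

# Old call sites unpacked two values; after the change they must
# discard (or consume) the third:
metas, out, _ = schedule()
```

Using `_` keeps existing two-value call sites working with a one-character change while making it explicit that the test does not care about the new value.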
```diff
@@ -180,7 +180,7 @@ def test_maximal_decoding():
     """Verify decoding requests are prioritized."""
     block_size = 4
     max_seqs = 2
-    max_model_len = 2
+    max_model_len = 8
     max_num_batched_tokens = 2
     scheduler_config = SchedulerConfig(max_num_batched_tokens,
                                        max_seqs,
```
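The second hunk raises `max_model_len` from 2 to 8. The likely reason (an inference from the config values shown, not stated in the diff) is that a sequence can only keep decoding while its total length stays below `max_model_len`, so a limit of 2 leaves a short prompt with no decode headroom at all. The arithmetic can be sketched as:

```python
# Illustrative arithmetic (not vLLM code): a sequence can decode at
# most max_model_len - prompt_len further tokens before hitting the
# model-length limit.

def decode_budget(prompt_len: int, max_model_len: int) -> int:
    """Tokens a sequence can still decode under the length limit."""
    return max(0, max_model_len - prompt_len)

# With max_model_len = 2, a 2-token prompt cannot decode anything:
assert decode_budget(2, 2) == 0
# Raising the limit to 8 leaves room for several decode steps:
assert decode_budget(2, 8) == 6
```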
WoosukKwon marked this conversation as resolved.