
[Bug]: ERROR 07-31 11:57:33 async_llm_engine.py:658] Engine iteration timed out. This should never happen! #6969

Closed
Tracked by #5901
lucasjinreal opened this issue Jul 31, 2024 · 3 comments
Labels
bug, stale

Comments

@lucasjinreal

### Your current environment

```
ERROR 07-31 11:57:33 async_llm_engine.py:658] Engine iteration timed out. This should never happen!
```


### 🐛 Describe the bug

This code does not work; it fails with the engine iteration timeout shown above:

```python
import time

from vllm import SamplingParams

sampling_params = SamplingParams(
    temperature=0.2,
    max_tokens=1024,
    stop=["<|im_start|>", "<|im_end|>"],
    skip_special_tokens=False,
)

images = None  # no image loaded until the user supplies one
while True:
    try:
        inp = input(f"{roles[0]}: ")
    except EOFError:
        inp = ""
    if inp == "\\d":
        print("exit...")
        break

    if is_image(inp):
        images = load_multi_images_maybe(inp)
        # clear conv history
        conv.messages = []
        print("Updated image, start new chat session.")
        continue

    print(f"{roles[1]}: ", end="")

    if images is not None:
        # first message: prepend one image token per image
        inp = f"{DEFAULT_IMAGE_TOKEN} " * len(images) + "\n" + inp
        if len(images) > 1:
            inp = convert_image_tags(inp)
        conv.append_message(conv.roles[0], inp)
    else:
        # later messages
        conv.append_message(conv.roles[0], inp)
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()

    if args.debug:
        print(f"prompt_real: {prompt}")

    if images is not None:
        inputs = {
            "prompt": prompt,
            "multi_modal_data": {"image": images[0]},
        }
        # only send the image with the first request
        images = None
    else:
        inputs = {"prompt": prompt}

    # the request id must be unique per request
    results_generator = model.generate(inputs, sampling_params, str(time.monotonic()))
    outputs = ""
    async for request_output in results_generator:
        prompt = request_output.prompt
        if request_output.finished:
            print()
        else:
            out = request_output.outputs[-1].text
            if not out:
                continue
            # print only the newly generated suffix
            out_delta = out[len(outputs):]
            print(out_delta, end="", flush=True)
            outputs = out
```
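Since the loop uses `async for`, it presumably runs inside an async entrypoint against an `AsyncLLMEngine`. A minimal sketch of such a driver, where the engine construction and model path are assumptions rather than details from this report:

```python
# Hypothetical driver for the chat loop above; the engine setup is assumed,
# not taken from this report.
import asyncio

from vllm import AsyncEngineArgs, AsyncLLMEngine


async def main():
    # `model` in the loop above is assumed to be an AsyncLLMEngine.
    model = AsyncLLMEngine.from_engine_args(
        AsyncEngineArgs(model="<your-model-path>")  # placeholder
    )
    # ... chat loop from above runs here ...


asyncio.run(main())
```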
lucasjinreal added the bug label Jul 31, 2024
@DarkLight1337 (Member)

Please report this issue to #5901 so we can better investigate this.
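
A possible stopgap in the meantime, assuming your vLLM build reads the `VLLM_ENGINE_ITERATION_TIMEOUT_S` environment variable (which controls `ENGINE_ITERATION_TIMEOUT_S` in `async_llm_engine.py`), is to raise the timeout while the root cause is investigated:

```python
# Assumption: this vLLM build reads VLLM_ENGINE_ITERATION_TIMEOUT_S (seconds)
# for the engine-iteration timeout. It must be set before vLLM is imported.
import os

os.environ["VLLM_ENGINE_ITERATION_TIMEOUT_S"] = "180"
```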


github-actions bot commented Nov 1, 2024

This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

github-actions bot added the stale label Nov 1, 2024

github-actions bot commented Dec 2, 2024

This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!

github-actions bot closed this as not planned Dec 2, 2024