
verl v0.2.1 & v0.3 release checklist #354

Open
3 of 15 tasks
eric-haibin-lin opened this issue Feb 23, 2025 · 13 comments

@eric-haibin-lin (Collaborator) commented Feb 23, 2025

v0.2.1

v0.3

Feel free to propose features (contributions are welcome!)

@BearBiscuit05 (Contributor)

How can I help with the 'tool calling examples' part?

@eric-haibin-lin (Collaborator, Author)

> How can I help with the 'tool calling examples' part?

related to:
#344
#340

Under the hood, chat calls generate, so the design is supposed to work; we just need to provide a working, stable example.
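
For concreteness, a minimal sketch of offline tool calling through vLLM's chat entry point. This assumes a vLLM version whose LLM.chat accepts a tools argument; the model name and the get_weather tool schema are illustrative assumptions, not part of verl:

# Minimal sketch: offline tool calling via LLM.chat. The model and the
# tool schema below are illustrative, not verl code.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")  # any chat model whose template understands tools

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# chat applies the model's chat template (injecting the tool schemas),
# then runs the same engine path as generate.
outputs = llm.chat(messages, SamplingParams(temperature=0.0, max_tokens=256), tools=tools)
print(outputs[0].outputs[0].text)  # the model may emit a tool call here

Parsing the emitted tool call and feeding the result back as a tool-role message would close the loop; that round trip is the part that still needs a stable example.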

@liyu199809

Will Megatron context parallelism be supported in the future?

@vermouth1992 (Collaborator)

> Will Megatron context parallelism be supported in the future?

Yes. We will use mcore, which supports CP by default.
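
For reference, a minimal sketch of where context parallelism is exposed in Megatron-Core (illustrative, not verl's integration; the sizes are arbitrary and this needs a torchrun launch with enough GPUs):

# Illustrative sketch of Megatron-Core's context-parallel knobs, not verl
# code. Run under torchrun with world_size divisible by TP * PP * CP.
import torch
from megatron.core import parallel_state
from megatron.core.transformer.transformer_config import TransformerConfig

torch.distributed.init_process_group(backend="nccl")
parallel_state.initialize_model_parallel(
    tensor_model_parallel_size=2,
    pipeline_model_parallel_size=1,
    context_parallel_size=2,  # shards the sequence dimension across ranks
)

config = TransformerConfig(
    num_layers=24,
    hidden_size=2048,
    num_attention_heads=16,
    tensor_model_parallel_size=2,
    context_parallel_size=2,  # must match the process-group setup above
)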

@casper-hansen

@BearBiscuit05 See #344, where I outlined the main challenge. I think it should be relatively straightforward if veRL can start using chat, or if vLLM directly adds support for tool calling in generate.

I imagine we can have GRPO-trained reasoners in the future that learn when to use tools as part of their <think> tags, e.g. executing code for a feedback loop or retrieving additional information.

@vermouth1992 (Collaborator)

> @BearBiscuit05 See #344, where I outlined the main challenge. I think it should be relatively straightforward if veRL can start using chat, or if vLLM directly adds support for tool calling in generate.
>
> I imagine we can have GRPO-trained reasoners in the future that learn when to use tools as part of their <think> tags, e.g. executing code for a feedback loop or retrieving additional information.

I talked to a vLLM maintainer yesterday. It seems there should be no blocker if we switch from generate to chat. Do you mind giving it a try, calling chat through SPMD-style offline inference?

@BearBiscuit05 (Contributor)

I'm not very familiar with inference, but I think I'm starting to get the hang of it. Does this mean I need to build a new chat function that adds extra tool-call params and then invokes generate? Or should I just replace generate directly with the chat function from vLLM?

@casper-hansen

You should be able to replace generate directly with chat. The only problem is that we currently pass tokenized inputs into generate, whereas chat expects List[ChatCompletionContentPartTextParam] or List[List[ChatCompletionContentPartTextParam]]. I'm not sure what the best design would be in this case.

Case 1: Detokenize the tokenized inputs we currently pass to generate.
Case 2: Change veRL to not tokenize datasets beforehand (a relatively big change).

from typing_extensions import Literal, Required, TypedDict

class ChatCompletionContentPartTextParam(TypedDict, total=False):
    text: Required[str]
    """The text content."""

    type: Required[Literal["text"]]
    """The type of the content part."""

@vermouth1992 (Collaborator) commented Feb 24, 2025

The second option would incur significant overhead from tokenizing on the fly (typically a 2x slowdown in generation, which is basically unacceptable). I guess we will need to pursue a solution for Case 1.
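
A rough sketch of what Case 1 could look like (the helper name and model are illustrative assumptions, not verl's actual code): decode each pre-tokenized prompt back to text once per batch, wrap it as a message list, and hand it to chat.

# Sketch of Case 1: bridge pre-tokenized prompts to chat by detokenizing.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

model = "Qwen/Qwen2.5-7B-Instruct"  # illustrative
tokenizer = AutoTokenizer.from_pretrained(model)
llm = LLM(model=model)

def chat_from_token_ids(prompt_token_ids, sampling_params, tools=None):
    # One batched decode per rollout; cheap relative to generation itself.
    texts = tokenizer.batch_decode(prompt_token_ids, skip_special_tokens=True)
    conversations = [[{"role": "user", "content": t}] for t in texts]
    return llm.chat(conversations, sampling_params, tools=tools)

outputs = chat_from_token_ids(
    [tokenizer.encode("Solve 2 + 2 and explain.")],
    SamplingParams(max_tokens=128),
)

One caveat: chat re-applies the chat template, so the detokenized text must be the raw user content rather than an already-templated prompt, otherwise the template gets applied twice.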

@BearBiscuit05 (Contributor)

Got it. I'll give it a try.

@liyu199809

> Will Megatron context parallelism be supported in the future?
>
> Yes. We will use mcore, which supports CP by default.

It seems that context parallelism on the model side has not been implemented yet. Is this feature currently available?

@BearBiscuit05 (Contributor)

> > Will Megatron context parallelism be supported in the future?
> >
> > Yes. We will use mcore, which supports CP by default.
>
> It seems that context parallelism on the model side has not been implemented yet. Is this feature currently available?

Not right now, but if you check this roadmap: CP will be supported once verl upgrades MCore.

@casper-hansen

Is it possible to optimize startup time? I noticed that launching a job with veRL is significantly slower than with Hugging Face TRL.
#384
