Conversation with agent with finetuned model #240
base: main
Conversation
Added features to download models from the Hugging Face model hub, load local Hugging Face models, and fine-tune a loaded model with a Hugging Face dataset. Model loading and fine-tuning can happen both at the initialization stage and after the agent has been initialized (see the README in `agentscope/examples/load_finetune_huggingface_model` for details). Major changes to the repo include the new example script `load_finetune_huggingface_model`, a new model wrapper `HuggingFaceWrapper`, and a new agent type `Finetune_DialogAgent`. All changes are contained in the new example directory `agentscope/examples/load_finetune_huggingface_model`.
made customized hyperparameter specification available via `model_configs` for fine-tuning at initialization, or via `fine_tune_config` in `Finetune_DialogAgent`'s `fine_tune` method after initialization
fixed an issue related to the `format` method
updated the required dependencies
updated the way the token is read from the `.env` file, so that it works from any example directory
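The ".env in any example directory" fix above can be sketched as an upward search from the current working directory. This is a minimal stdlib-only sketch of that behavior; the variable name `HUGGINGFACE_TOKEN`, the function name, and the manual search are assumptions (the real example may rely on python-dotenv's `find_dotenv` instead):

```python
import os
from pathlib import Path
from typing import Optional


def load_hf_token(start: Optional[Path] = None) -> Optional[str]:
    """Walk upward from ``start`` looking for a ``.env`` file, so the
    token is found no matter which example subdirectory we run from.
    (Sketch only; names are assumptions, not the PR's exact code.)"""
    start = start or Path.cwd()
    for directory in [start, *start.parents]:
        env_file = directory / ".env"
        if env_file.is_file():
            for line in env_file.read_text().splitlines():
                if line.startswith("HUGGINGFACE_TOKEN="):
                    return line.split("=", 1)[1].strip()
    # fall back to the process environment if no .env file is found
    return os.environ.get("HUGGINGFACE_TOKEN")
```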
Please make sure every pushed version is a ready version; otherwise, mark the PR title with "[WIP]".
examples/conversation_with_agent_with_finetuned_model/finetune_dialogagent.py (outdated comment; resolved)
optimized the behavior of `device_map` when loading a Hugging Face model: if `device` is not given by the user, `device_map` defaults to `"auto"`; otherwise `device_map` is set to the user-specified `device`
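The device-resolution rule described above is simple enough to sketch directly; the helper name here is illustrative, not the PR's:

```python
from typing import Optional


def resolve_device_map(device: Optional[str]) -> str:
    """Default to "auto" placement when the user gives no device;
    otherwise honor the user-specified device string."""
    return "auto" if device is None else device


# The result would then typically be forwarded to the loader, e.g.:
# AutoModelForCausalLM.from_pretrained(model_id,
#                                      device_map=resolve_device_map(device))
```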
now the user can choose to do full-parameter finetuning by not passing `lora_config`
now the user can choose to do full-parameter finetuning by not passing `lora_config` and `bnb_config`
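The choice between full-parameter and parameter-efficient fine-tuning is thus driven purely by which optional configs are passed. A rough sketch of that dispatch (the helper name and mode labels are hypothetical, not AgentScope's API):

```python
def select_training_mode(lora_config=None, bnb_config=None) -> str:
    """Omitting both optional configs selects full-parameter
    fine-tuning, mirroring the behavior described above."""
    if lora_config is None and bnb_config is None:
        return "full-parameter"
    if lora_config is not None and bnb_config is not None:
        return "qlora"  # quantized base model + LoRA adapters
    return "lora" if lora_config is not None else "quantized"
```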
…etuned_model' into conversation_with_agent_with_finetuned_model
- Please see inline comments.
- As an example, it looks good to me overall. However, we need to further consider the API interfaces before integrating it into the library.
examples/conversation_with_agent_with_finetuned_model/huggingface_model.py (three outdated comments; resolved)
```python
Features include model and tokenizer loading,
and fine-tuning on the lima dataset with adjustable parameters.
"""
# pylint: disable=unused-import
```
remove the disable here
If I remove it, pre-commit reports `W0611: Unused HuggingFaceWrapper imported from huggingface_model (unused-import)`. Furthermore, removing `from huggingface_model import HuggingFaceWrapper` causes the default model wrapper to be used instead, which leads to an error. Moving `HuggingFaceWrapper` into `agentscope/src/agentscope/models` might solve this issue, though.
Can I proceed to make `HuggingFaceWrapper` part of `agentscope/src/agentscope/models` to resolve this issue?
```python
        output_texts.append(text)
    return output_texts


def fine_tune_training(
```
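The fragment above is the tail of a formatting helper that builds one training string per example before handing them to the trainer. A self-contained sketch of that step; the field names and prompt template are assumptions, not the PR's exact ones:

```python
def format_examples(dataset):
    """Turn (instruction, response) pairs into single SFT training
    strings. Field names and the prompt template are illustrative."""
    output_texts = []
    for example in dataset:
        text = (
            f"### Question: {example['instruction']}\n"
            f"### Answer: {example['response']}"
        )
        output_texts.append(text)
    return output_texts
```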
Prefix functions that are not exposed to users with "_".
Done. But I didn't prefix `format` with an underscore, as this is the case for all wrappers under `agentscope/src/agentscope/models`. Is `format` intended to be exposed to the users?
updated according to the latest comments
…ug for continual finetuning.
… `PeftModel` before converting it to `PeftModel`
Description
Moved `conversation_with_agent_with_finetuned_model` to a separate branch from main to keep it consistent with the official repo.
Checklist
Please check the following items before code is ready to be reviewed.