Merged InstructionTuning and RerankFinetuning into Finetuning #1603
Conversation
Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>
Dependency Review: ✅ No vulnerabilities or license issues found.
PR Overview
This PR consolidates InstructionTuning and RerankFinetuning into a unified Finetuning module by updating service names and documentation as well as removing redundant RerankFinetuning files.
- Merged and updated README content to include both instruction tuning and rerank finetuning instructions.
- Renamed deployment sections in Docker Compose documentation to reflect the new Finetuning service.
- Removed deprecated RerankFinetuning documentation files.
Reviewed Changes
| File | Description |
|---|---|
| Finetuning/README.md | Updated title and instructions to cover both tunings. |
| Finetuning/docker_compose/intel/hpu/gaudi/README.md | Renamed service title and updated service description. |
| Finetuning/docker_compose/intel/cpu/xeon/README.md | Renamed service title and updated service description. |
| RerankFinetuning/docker_compose/intel/hpu/gaudi/README.md | Removed redundant deployment documentation. |
| RerankFinetuning/docker_compose/intel/cpu/xeon/README.md | Removed redundant deployment documentation. |
| RerankFinetuning/README.md | Removed redundant documentation. |
Copilot reviewed 12 out of 12 changed files in this pull request and generated 1 comment.
Comments suppressed due to low confidence (2)
Finetuning/docker_compose/intel/hpu/gaudi/README.md:3
- Change 'finetuning Service' to 'Finetuning Service' for consistency with the naming used in other parts of the documentation.
This document outlines the deployment process for a finetuning Service utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice on Intel Gaudi server. The steps include Docker image creation, container deployment. We will publish the Docker images to Docker Hub, it will simplify the deployment process for this service.
Finetuning/docker_compose/intel/cpu/xeon/README.md:3
- Change 'finetuning Service' to 'Finetuning Service' for naming consistency across documentation.
This document outlines the deployment process for a finetuning Service utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice on Intel Xeon server. The steps include Docker image creation, container deployment. We will publish the Docker images to Docker Hub, it will simplify the deployment process for this service.
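The deployment process the quoted READMEs describe (Docker image creation, then container deployment via Docker Compose) can be sketched roughly as follows. This is a hedged illustration only: the Dockerfile path, image name, and tag below are assumptions for the sketch and are not taken from this PR.

```shell
# Hypothetical sketch of the two steps the READMEs describe:
#   1) build the finetuning image from GenAIComps,
#   2) bring up the compose stack for the target platform.
# The Dockerfile path, image name, and tag are assumptions, not from this PR.

git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build -t opea/finetuning:latest -f comps/finetuning/Dockerfile .

# Then start the service from the platform-specific compose directory
# (here the Xeon variant mentioned in the README above):
cd ../Finetuning/docker_compose/intel/cpu/xeon
docker compose up -d
```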
This example includes instruction tuning and rerank model finetuning. Instruction tuning is the process of further training LLMs on a dataset consisting of (instruction, output) pairs in a supervised fashion, which bridges the gap between the next-word prediction objective of LLMs and the users' objective of having LLMs adhere to human instructions. Rerank model finetuning is the process of further training rerank model on a dataset for improving its capability on specific field. The implementation of this example deploys a Ray cluster for the task.
Copilot AI (Mar 4, 2025)
Consider revising 'on specific field' to 'on a specific field' for improved grammatical accuracy.
- This example includes instruction tuning and rerank model finetuning. Instruction tuning is the process of further training LLMs on a dataset consisting of (instruction, output) pairs in a supervised fashion, which bridges the gap between the next-word prediction objective of LLMs and the users' objective of having LLMs adhere to human instructions. Rerank model finetuning is the process of further training rerank model on a dataset for improving its capability on specific field. The implementation of this example deploys a Ray cluster for the task.
+ This example includes instruction tuning and rerank model finetuning. Instruction tuning is the process of further training LLMs on a dataset consisting of (instruction, output) pairs in a supervised fashion, which bridges the gap between the next-word prediction objective of LLMs and the users' objective of having LLMs adhere to human instructions. Rerank model finetuning is the process of further training rerank model on a dataset for improving its capability on a specific field. The implementation of this example deploys a Ray cluster for the task.
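The "(instruction, output) pairs" format discussed in the text above can be illustrated with a minimal sketch. The field names `instruction` and `output` and the prompt template below follow a common Alpaca-style convention and are assumptions for illustration, not something defined in this PR.

```python
# Minimal sketch: rendering (instruction, output) pairs as supervised
# training strings. Field names and the "### Instruction/Response"
# template are a common convention, assumed here for illustration.

def format_example(pair: dict) -> str:
    """Render one (instruction, output) pair as a single training string."""
    return (
        "### Instruction:\n"
        f"{pair['instruction']}\n\n"
        "### Response:\n"
        f"{pair['output']}"
    )

dataset = [
    {"instruction": "Translate 'hello' to French.", "output": "bonjour"},
    {"instruction": "What is 2 + 2?", "output": "4"},
]

formatted = [format_example(p) for p in dataset]
print(formatted[0])
```

Training on strings like these is what bridges next-word prediction and instruction following, as the passage above describes.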
for more information, see https://pre-commit.ci
We need a discussion on this feature. This PR will not target v1.3.
joshuayao left a comment
Let's hold off on merging this PR to v1.3 for now.
This PR is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.

This PR was closed because it has been stalled for 7 days with no activity.

@XinyuYe-Intel do we still need this PR? If yes, please reopen it.

Currently we don't.
Signed-off-by: ZePan110 <ze.pan@intel.com>
Description
Merged InstructionTuning and RerankFinetuning into Finetuning.
Issues
n/a.

Type of change
Dependencies
none.
Tests
UT.