
Has the chaoyi-wu/PMC_LLAMA_7B checkpoint gone through SFT or RLHF? #30

Open
shamanez opened this issue Jan 31, 2024 · 1 comment

@shamanez

Is the above checkpoint the pre-trained model trained only on unsupervised data, i.e. has it not seen any instruction-tuning datasets?

I have another question: did you use llama2-base as the base model to conduct continual pre-training with research papers and books?

@chaoyi-wu
Owner

No, PMC_LLaMA_7B has not been trained on any instruction-tuning datasets, while our latest PMC_LLaMA_13B has been instruction tuned.

We have not tried LLaMA-2 for continual pre-training, since in our evaluation LLaMA-2 is mainly enhanced over LLaMA in instruction-following ability, while the gain in basic knowledge is limited.
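Since the 7B checkpoint is a plain pre-trained model, it is best prompted as a completion model rather than a chat model. Below is a minimal sketch of loading it with Hugging Face transformers, assuming the checkpoint is published on the Hub under the ID from the issue title and is compatible with the standard Auto classes; the example prompt is purely illustrative.

```python
# Minimal sketch (assumption: the checkpoint is available on the Hugging Face
# Hub under the ID from the issue title and loads with the standard Auto classes).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chaoyi-wu/PMC_LLAMA_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Since the 7B checkpoint has not been instruction tuned, prompt it in
# completion style (text to be continued) rather than with a chat/instruction template.
prompt = "Metformin lowers blood glucose primarily by"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```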
