
(Question) About glue tasks #52

Open
ZhichaoWang091732 opened this issue Jun 13, 2024 · 4 comments

Comments

@ZhichaoWang091732

Hello, thanks for your inspiring and excellent work!

I want to compare full fine-tuning against GaLore, so I disabled GaLore. However, when I run a GLUE task (e.g. MRPC) to fully fine-tune RoBERTa, the eval accuracy does not change at all as training progresses. I have ruled out overfitting, and I would like to ask the author (or anyone else) whether there is a known solution.

[screenshot: eval accuracy flat over training]

@jiaweizzhao
Owner

Hi, thanks for your question. Were you using the hyperparameters and settings provided by our paper (appendix)?

@mzf666

mzf666 commented Aug 29, 2024

I have the same issue. I checked that the gradient norm and the learning rate are not zero. In the original code, the metric is initialized once and never refreshed, so it keeps accumulating new predictions across the whole training run. Hence, I manually reload the metric before each evaluation with `metric = evaluate.load('glue', args.task_name)`.

However, even after fixing this potential bug, the eval loss of the fine-tuned model does change, but the accuracy and F1 metrics remain the same.
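The stale-metric behavior can be illustrated with a minimal accumulator (a pure-Python stand-in for an `evaluate`-style metric object; the class and the numbers are illustrative, not the repo's actual code):

```python
class AccuracyMetric:
    """Minimal stand-in for an evaluate-style metric that accumulates batches."""
    def __init__(self):
        self.preds, self.refs = [], []

    def add_batch(self, predictions, references):
        self.preds.extend(predictions)
        self.refs.extend(references)

    def compute(self):
        # Bug source: the state is NOT cleared between evaluations, so every
        # later epoch's score is diluted by all previously seen predictions.
        correct = sum(p == r for p, r in zip(self.preds, self.refs))
        return correct / len(self.preds)

# Epoch 1: 50% accuracy on this eval pass
metric = AccuracyMetric()
metric.add_batch([0, 1], [0, 0])
print(metric.compute())  # 0.5

# Epoch 2 without re-creating the metric: old predictions still counted
metric.add_batch([0, 0], [0, 0])  # this epoch alone would score 1.0
print(metric.compute())  # 0.75, not 1.0 -- scores barely move over training

# Workaround as described above: rebuild the metric before each eval,
# e.g. metric = evaluate.load('glue', args.task_name)
metric = AccuracyMetric()
metric.add_batch([0, 0], [0, 0])
print(metric.compute())  # 1.0
```

With the rebuild, each evaluation scores only the current epoch's predictions, which is the intended behavior.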

python run_glue_geomlrk.py \
    --model_name_or_path roberta-base \
    --task_name mrpc \
    --max_length 512 \
    --seed=1234 \
    --lora_r 8 \
    --lora_all_modules \
    --per_device_train_batch_size 16 \
    --num_train_epochs 30 \
    --learning_rate 1e-4 \
    --lr_scheduler_type linear \
    --weight_decay 0.1 

[screenshot: eval metrics with learning_rate 1e-4]

python run_glue_geomlrk.py \
    --model_name_or_path roberta-base \
    --task_name mrpc \
    --max_length 512 \
    --seed=1234 \
    --lora_r 8 \
    --lora_all_modules \
    --per_device_train_batch_size 16 \
    --num_train_epochs 30 \
    --learning_rate 1e-2 \
    --lr_scheduler_type linear \
    --weight_decay 0.1 

[screenshot: eval metrics with learning_rate 1e-2]

@mzf666

mzf666 commented Aug 29, 2024


I spent several hours adjusting the hyperparameters and found that the AdamW optimizer does work with a suitable learning rate. You can try this launch command:

CUDA_VISIBLE_DEVICES=$cuda_idx python run_glue_geomlrk.py \
    --model_name_or_path roberta-base \
    --task_name mrpc \
    --max_length 512 \
    --seed=1234 \
    --lora_r 4 \
    --per_device_train_batch_size 16 \
    --num_train_epochs 30 \
    --learning_rate 1e-5 \
    --lr_scheduler_type linear \
    --weight_decay 0.1

This leads to the result

[screenshot: eval metrics with learning_rate 1e-5]

It seems that an improper learning rate can drive the model to mode collapse, i.e. it assigns the same logits to every input sequence. The accuracy and F1 score then remain unchanged, because they are scoring a fixed guess.
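The fixed-guess effect can be checked numerically. In this pure-Python sketch, a collapsed model always predicts the majority class, so accuracy is pinned at the class frequency and F1 never moves (the 68/32 label split roughly mirrors MRPC's skew and is illustrative):

```python
def accuracy(preds, refs):
    """Fraction of predictions matching the references."""
    return sum(p == r for p, r in zip(preds, refs)) / len(refs)

def f1_binary(preds, refs, positive=1):
    """Binary F1 for the given positive class."""
    tp = sum(p == positive and r == positive for p, r in zip(preds, refs))
    fp = sum(p == positive and r != positive for p, r in zip(preds, refs))
    fn = sum(p != positive and r == positive for p, r in zip(preds, refs))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative label skew (~68% positive, similar to MRPC's paraphrase rate).
refs = [1] * 68 + [0] * 32

# A collapsed model emits the same logits for every input, so argmax is constant:
collapsed = [1] * 100
print(accuracy(collapsed, refs))   # 0.68 -- pinned at the class frequency
print(f1_binary(collapsed, refs))  # ~0.81, identical at every evaluation
```

No matter how the loss moves, these two metrics cannot change until the model starts producing input-dependent predictions, which matches the flat curves in the screenshots.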

@MaeChd

MaeChd commented Nov 14, 2024


Hello, thank you for your answer to this question.
However, when I tried to reproduce LoRA fine-tuning of RoBERTa on the GLUE tasks, the training loss was almost unchanged and the evaluation metrics stayed constant. Have you run into this, and do you have any suggestions?
