Example application for fine-tuning the instruction-tuned 9B variant of Google's Gemma 2 model (google/gemma-2-9b-it) on a new task (Stance Detection).
To make the fine-tuning feasible on a consumer GPU, a parameter-efficient fine-tuning (PEFT) approach based on QLoRA (Quantized Low-Rank Adaptation) is applied. While the original model has 9,295,724,032 parameters, this approach trains only 54,018,048 parameters (~0.581%) and leaves the original model weights entirely frozen during fine-tuning. With this setup, the model was fine-tuned on the task for two epochs on a single RTX 4080 GPU.
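The sketch below illustrates such a QLoRA setup with `transformers`, `bitsandbytes` and `peft`. The LoRA rank of 16 and the set of target modules are chosen so that the trainable-parameter count matches the number above; the remaining hyperparameters (alpha, dropout) are assumptions, not necessarily the exact values used for this repository.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "google/gemma-2-9b-it"

# Load the frozen base model in 4-bit NF4 quantization (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Attach low-rank adapters; only these ~54M parameters are trained.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,       # assumed value
    lora_dropout=0.05,   # assumed value
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the trainable vs. total parameter count
```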
The model was fine-tuned on a Stance Detection corpus of 5,000 sentences that I manually annotated during my Master's Thesis. Stance Detection aims to classify the stance a sentence takes towards a claim (topic) as either Pro, Contra or Neutral. The sentences originate from Reddit's r/ChangeMyView subreddit between January 2013 and October 2018, as provided in the ConvoKit subreddit corpus. They cover five topics: abortion, climate change, gun control, minimum wage and veganism. The table below shows some examples.
topic | sentence | stance label |
---|---|---|
There should be more gun control. | It's the only country with a "2nd Amendment", yet 132 countries have a lower murder rate. | Pro |
Humanity needs to combat climate change. | The overwhelming evidence could be lies and you would never know because you're content to live your life as a giant appeal to authority. | Contra |
Vegans are right. | It's all about finding a system that works for you. | Neutral |
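As a rough sketch, such a corpus could be loaded and split as follows; the file name, column names and split ratio are hypothetical placeholders, not necessarily those used in this repository.

```python
import pandas as pd
from datasets import Dataset

# Hypothetical file name; openpyxl handles the .xlsx format.
df = pd.read_excel("stance_corpus.xlsx")
dataset = Dataset.from_pandas(df)  # expects columns like: topic, sentence, stance label

# Assumed 90/10 split into training and evaluation data.
splits = dataset.train_test_split(test_size=0.1, seed=42)
train_dataset, test_dataset = splits["train"], splits["test"]
```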
For the instruction-based fine-tuning and inference, each topic-sentence pair is wrapped in the following prompt:
prompt | expected output |
---|---|
The first text is a hypothesis/claim, the second text is a sentence. Determine whether the sentence is a pro argument ("pro"), a contra argument ("con") or doesn't take position at all/is neutral ("neu") towards the hypothesis.<br>For your answer, just write exactly one of pro, con or neu, not a full text. | pro |
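A minimal sketch of wrapping one topic-sentence pair in this prompt and formatting it with Gemma's chat template is shown below; the exact way the hypothesis and sentence texts are appended to the instruction is an assumption.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-9b-it")

PROMPT = (
    "The first text is a hypothesis/claim, the second text is a sentence. "
    'Determine whether the sentence is a pro argument ("pro"), a contra argument ("con") '
    'or doesn\'t take position at all/is neutral ("neu") towards the hypothesis.\n'
    "For your answer, just write exactly one of pro, con or neu, not a full text.\n\n"
    "Hypothesis: {topic}\nSentence: {sentence}"  # assumed layout of the two texts
)

def build_messages(topic: str, sentence: str) -> list[dict]:
    """Wraps one topic-sentence pair as a single user turn (Gemma uses no system role)."""
    return [{"role": "user", "content": PROMPT.format(topic=topic, sentence=sentence)}]

# During fine-tuning, the expected label ("pro"/"con"/"neu") is appended as the model turn;
# at inference time the model generates it.
messages = build_messages(
    "There should be more gun control.",
    "It's the only country with a \"2nd Amendment\", yet 132 countries have a lower murder rate.",
)
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)
```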
The fine-tuned model is compared against the base model using Accuracy, Micro-F1 and Macro-F1:

Model | Accuracy | Micro-F1 | Macro-F1 |
---|---|---|---|
base model (google/gemma-2-9b-it) | 0.72 | 0.71 | 0.71 |
fine-tuned model (gemma2-9b-it-stance-finetuned) | 0.90 | 0.89 | 0.90 |
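The reported metrics can be computed with scikit-learn, for example:

```python
from sklearn.metrics import accuracy_score, f1_score

# Gold and predicted stance labels ("pro", "con", "neu"); placeholder values shown here.
y_true = ["pro", "con", "neu", "pro"]
y_pred = ["pro", "con", "con", "pro"]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("Micro-F1:", f1_score(y_true, y_pred, average="micro"))
print("Macro-F1:", f1_score(y_true, y_pred, average="macro"))
```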
The following packages are required:

```
pytorch==2.4.0
cudatoolkit=12.1
transformers
datasets
trl
sentencepiece
protobuf
peft
bitsandbytes
openpyxl
scikit-learn
```
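Given the objects from the sketches above (the quantized PEFT `model`, the `tokenizer` and a `train_dataset` whose assumed `"text"` column holds the formatted prompt plus answer), the two-epoch fine-tuning can be run with trl's `SFTTrainer` roughly as follows. Batch size, learning rate and sequence length are assumptions, and argument names may differ slightly between trl versions.

```python
from trl import SFTConfig, SFTTrainer

training_args = SFTConfig(
    output_dir="gemma2-9b-it-stance-finetuned",
    num_train_epochs=2,                 # fine-tuned for two epochs
    per_device_train_batch_size=1,      # assumed; chosen to fit a single RTX 4080
    gradient_accumulation_steps=8,      # assumed
    learning_rate=2e-4,                 # assumed
    max_seq_length=512,                 # assumed
    dataset_text_field="text",          # assumed column name
    logging_steps=50,
    save_strategy="epoch",
)

trainer = SFTTrainer(
    model=model,                 # quantized PEFT model from the setup sketch
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```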
The dataset files in this repository are cut off after the first 50 rows.
The trained model files `adapter_model.safetensors`, `optimizer.pt` and `tokenizer.json` are omitted in this repository.