I know the repo's README mentions that this model apparently can't code because consecutive spaces get merged by the tokenizer, and this has been discussed in #40.
However, I did some fine-tuning on the 3B model using the "fixed" tokenizer by @danielhanchen https://huggingface.co/danielhanchen/open_llama_3b with use_fast=True. This tokenizer encodes multiple spaces as multiple space tokens; unlike the "official" tokenizer, it doesn't discard them.
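For anyone who wants to check the difference themselves, here's a minimal sketch of the comparison I mean. I'm assuming openlm-research/open_llama_3b is the official repo id; adjust the names if yours differ:

```python
# Compare how the two tokenizers treat runs of spaces (indentation).
from transformers import AutoTokenizer

text = "def f(x):\n    return x  # four leading spaces above"

# Official tokenizer vs. the "fixed" one mentioned in this thread.
official = AutoTokenizer.from_pretrained("openlm-research/open_llama_3b")
fixed = AutoTokenizer.from_pretrained("danielhanchen/open_llama_3b", use_fast=True)

for name, tok in [("official", official), ("fixed", fixed)]:
    ids = tok(text, add_special_tokens=False)["input_ids"]
    # The round trip preserves indentation only if the space tokens survive.
    print(name, repr(tok.decode(ids)))
```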
My fine-tuning dataset includes very little code, as I wasn't really trying to target coding; it's just a small part of the instructions in the instruct datasets I used. But then I noticed this in one of the model's outputs. Lo and behold, perfectly indented Python code.
A lot of people out there are simply repeating that OpenLLaMA is useless for code, but that doesn't seem to be the case, provided the tokenizer configuration is fixed and a little bit of fine-tuning is done.
It would be interesting if a LoRA could be trained for this, so that one could just apply it without needing to fine-tune the model. Such a LoRA might also be applicable to other OpenLLaMA-derived models. A rough sketch of what applying it could look like is below.
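Something along these lines should work with PEFT once such an adapter exists; the adapter id below is purely a placeholder, not a real repo:

```python
# Sketch: apply a hypothetical code-focused LoRA adapter on top of OpenLLaMA.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("openlm-research/open_llama_3b")
# "your-name/openllama-3b-code-lora" is a placeholder adapter id.
model = PeftModel.from_pretrained(base, "your-name/openllama-3b-code-lora")
tokenizer = AutoTokenizer.from_pretrained("danielhanchen/open_llama_3b", use_fast=True)

prompt = "Write a Python function that reverses a string.\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```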
@jorgemcgomes Oh, I kinda forgot to reply here! @young-geng Congrats on the new release of v2! Trying it out right now :) I can see that both the multiple-spaces issue and the fast tokenizer are fixed in the Hugging Face base repo (the thermal example you provided). Good work!