Adding validation splits to (experimental) text_classification datasets that do not have vocabulary built over them #690
Comments
Yes, the vocabulary is always built over the training set. The issue comes when you split the training set into training and validation sets. Your validation set has been numericalized from a vocabulary that has already "seen" all of these validation examples when they were part of the training set. This means information leaks from the training set into the validation set, giving inflated validation scores. The validation set should not be taken from the test set. When comparing results on a dataset, everyone should be using the exact same test set. Creating a validation set from the test set violates this, is extremely bad practice, and causes information to leak from the test set into the validation set.
I agree with your point that splitting the train dataset will leak information through the vocab and result in inflated validation scores, and that people should ideally split the train data into train/valid subsets. However, I don't think splitting the test set here is "extremely bad practice" :). The purpose of having a test set, IMO, is to have a separate dataset that is never touched during the training process. To this point, if we split the test dataset into test/valid sub-datasets, we use the valid sub-dataset through the epochs but never touch the test sub-dataset. In that sense, we just have a smaller dataset for testing in the end. The problem with this method is that we never change the valid dataset across epochs. This is true for the word language modeling datasets (like WikiText-2), where we have a fixed valid dataset.
I disagree and believe it is bad practice. If you release a paper with results showing X% accuracy over the test set, the only way I can compare a new method is if I use the exact same test set as you. If you have used 100% of the test set to calculate your test accuracy and I have used 80% of the test set (as I've used 20% of it for my validation set), then these results are incomparable as we haven't used the exact same test set. Test sets should never be touched, in any way, including splitting them to form validation sets.
I agree that the test set should never be touched. However, I do not agree that we should introduce an option that generates a validation dataset if such a validation dataset has not been defined by the dataset creators. The point here is to provide a reference that yields the train and test datasets as described by the dataset creators. Some people may then split train into a training and validation dataset with an 80/20 split. Or maybe they'll use a 70/30 split. Or maybe they'll use a fixed seed to pull a "random" subset from the training dataset, etc. So, if someone wants to do this train/validation split, and they surely will, we should have an abstraction that makes it easy, but we should not make that choice here by default. Otherwise we diverge from the idea that this dataset implementation should do one thing well: provide a reference for the dataset as described by the dataset creators.
One way of dealing with this would be to modify text_classification to return the raw text instead of building a vocab if one doesn't exist. That way you'd get training and testing datasets that yield the lines of text (in UTF-8 format), which could then be fed into a vocab factory. We could, for now, add a flag that causes the raw text to be returned, and later decide whether we want to make that the default.
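A hedged sketch of that "vocab factory" idea, operating on a raw-text iterator of `(label, line)` pairs; the helper name and the legacy `Vocab` constructor used here are assumptions rather than an existing API:

```python
from collections import Counter
from torchtext.vocab import Vocab

def build_vocab_from_raw(raw_text_iter, tokenizer, max_size=25000):
    """Build a vocabulary from raw (label, line) examples only."""
    counter = Counter()
    for label, line in raw_text_iter:
        counter.update(tokenizer(line))
    return Vocab(counter, max_size=max_size)
```

Because any train/valid split can be applied to the raw text before this factory is called, the vocabulary would never see the validation examples.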
I would prefer this over my proposed solution. |
fixed in #701 |
🚀 Feature
The experimental text_classification datasets should have a way to build a validation set from them, without the vocabulary being built over the validation set.
Motivation
In ML, you should always have a test, validation and training set. In NLP, your vocabulary should be built from the training set only, and not from the test/validation.
The current experimental text classification (IMDB) dataset does not have a validation set and automatically builds the vocabulary whilst loading the train/test sets. After loading the train and test sets, we would need to construct a validation set with `torch.utils.data.random_split`. The issue here is that our vocabulary has already been built over the validation set we are about to create. There is currently no way to solve this issue.
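For context, a minimal sketch of that pattern, assuming the experimental `IMDB` dataset API at the time of this issue (exact import paths and signatures may differ between versions):

```python
import torch
from torchtext.experimental.datasets import IMDB

# The vocabulary is built over the *entire* training set while loading.
train_dataset, test_dataset = IMDB()

# Splitting afterwards is too late: the validation examples were already
# "seen" when the vocabulary was built, so information leaks into validation.
n_train = int(len(train_dataset) * 0.8)
train_subset, valid_subset = torch.utils.data.random_split(
    train_dataset, [n_train, len(train_dataset) - n_train])
```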
Pitch
`'valid'` should be accepted as a `data_select` argument and should create a validation set before the vocabulary has been built over the training set. As the IMDB dataset does not have a standardized validation split, we can do something like taking the last 20% of the training set. I am proposing something like the following after the `iters_group` is created here:
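A rough sketch of what this could look like (hypothetical code; the actual `iters_group` is built inside the experimental `text_classification` setup, so the exact shape of these objects may differ):

```python
from itertools import tee, islice

if 'valid' in data_select:
    # Duplicate the training iterator: one copy to count the examples,
    # one to slice out the new training set, one for the validation set.
    train_count, train_main, train_valid = tee(iters_group['train'], 3)

    num_examples = sum(1 for _ in train_count)  # length of the original training set
    num_train = int(num_examples * 0.8)         # first 80% stays as training data

    iters_group['train'] = islice(train_main, num_train)         # first 80%
    iters_group['valid'] = islice(train_valid, num_train, None)  # last 20%
    # The 'vocab' iterator must match the reduced training set so the
    # vocabulary is built only over the training examples.
    iters_group['vocab'] = islice(iters_group['vocab'], num_train)
```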
`tee` duplicates generators and `islice` slices into generators. We need to duplicate the training data iterator as we will be using it three times. We use the first iterator to get the length of the training set so we know what size the validation set will be (the last 20% of the examples from the training set). We then use `islice` to get the last 20% of the training examples to form the validation set, the first 80% of the training examples to use as the new training set, and the first 80% of the examples of the "vocab" set, as this needs to match the training set since it is what we want to build our vocab from.
We can now correctly load a train, valid and test set with the vocabulary built only over the training set:
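Hypothetical usage once `'valid'` is supported; the keyword, return order, and `get_vocab` accessor here are assumptions based on the existing experimental API:

```python
from torchtext.experimental.datasets import IMDB

train_dataset, valid_dataset, test_dataset = IMDB(
    data_select=('train', 'valid', 'test'))
vocab = train_dataset.get_vocab()  # built only over the reduced training split
```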
A custom vocabulary built from the original vocabulary can also be loaded like so (note that `'valid'` needs to be in the `data_select` argument when building the original vocabulary):
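A hedged sketch of what that could look like, assuming the legacy `Vocab` constructor and the experimental dataset's `vocab`/`get_vocab` hooks (both are assumptions about the API of the time):

```python
from torchtext.experimental.datasets import IMDB
from torchtext.vocab import Vocab

# Build the original vocabulary with 'valid' in data_select, so the
# train/valid split happens before that vocab is built.
train_dataset, valid_dataset, test_dataset = IMDB(
    data_select=('train', 'valid', 'test'))
old_vocab = train_dataset.get_vocab()

# Derive a smaller custom vocabulary from the original one and reload.
new_vocab = Vocab(counter=old_vocab.freqs, max_size=25000)
train_dataset, valid_dataset, test_dataset = IMDB(
    data_select=('train', 'valid', 'test'), vocab=new_vocab)
```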
Happy to make the PR if this is given the go-ahead.