Update run_glue for do_predict with local test data (#9442) #9486

Merged
Changes from 4 commits
35 changes: 25 additions & 10 deletions examples/text-classification/run_glue.py
@@ -93,6 +93,7 @@ class DataTrainingArguments:
     validation_file: Optional[str] = field(
         default=None, metadata={"help": "A csv or a json file containing the validation data."}
     )
+    test_file: Optional[str] = field(default=None, metadata={"help": "A csv or a json file containing the test data."})

     def __post_init__(self):
         if self.task_name is not None:
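Since run_glue.py parses DataTrainingArguments with HfArgumentParser, the new field surfaces as a `--test_file` command-line flag. A minimal, self-contained sketch of that pattern (the dataclass here is trimmed to the relevant fields for illustration; the real class has many more):

from dataclasses import dataclass, field
from typing import Optional

from transformers import HfArgumentParser


@dataclass
class DataTrainingArguments:
    # Trimmed-down illustration; run_glue.py's real class has more fields.
    train_file: Optional[str] = field(default=None, metadata={"help": "A csv or a json file containing the training data."})
    validation_file: Optional[str] = field(default=None, metadata={"help": "A csv or a json file containing the validation data."})
    test_file: Optional[str] = field(default=None, metadata={"help": "A csv or a json file containing the test data."})


# Each field becomes a CLI flag, e.g. `--test_file data/test.csv`.
(data_args,) = HfArgumentParser(DataTrainingArguments).parse_args_into_dataclasses()
print(data_args.test_file)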
@@ -205,16 +206,30 @@ def main():
     if data_args.task_name is not None:
         # Downloading and loading a dataset from the hub.
         datasets = load_dataset("glue", data_args.task_name)
-    elif data_args.train_file.endswith(".csv"):
-        # Loading a dataset from local csv files
-        datasets = load_dataset(
-            "csv", data_files={"train": data_args.train_file, "validation": data_args.validation_file}
-        )
     else:
-        # Loading a dataset from local json files
-        datasets = load_dataset(
-            "json", data_files={"train": data_args.train_file, "validation": data_args.validation_file}
-        )
+        # Loading a dataset from your local files.
+        # CSV/JSON training and evaluation files are needed.
+        data_files = {"train": data_args.train_file, "validation": data_args.validation_file}
+
+        # Get the test dataset: you can provide your own CSV/JSON test file (see below)
+        # when you use `do_predict` without specifying a GLUE benchmark task.
+        if training_args.do_predict:
+            if data_args.test_file is not None:
+                extension = data_args.test_file.split(".")[-1]
+                assert extension in ["csv", "json"], "`test_file` should be a csv or a json file."
Review comment (Collaborator):
The extension will need to be the same one as for the training and validation files, so we should adapt this assert to test that.

Reply (Contributor, author):
Following the review, the assert now checks that the test file has the same extension as the train file. I also noticed there was no check that the validation file has the same extension as the train file, so I added one there as well. Is this change OK?
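For reference, here is a sketch of the direction the review points in, written as a hypothetical standalone helper (`check_extensions` is not a function in the script; the later commits inline equivalent asserts):

def check_extensions(train_file: str, validation_file: str, test_file: str) -> None:
    # Hypothetical helper: the train file must be csv or json, and the
    # validation/test files must share the train file's extension.
    train_ext = train_file.split(".")[-1]
    assert train_ext in ["csv", "json"], "`train_file` should be a csv or a json file."
    for name, path in [("validation_file", validation_file), ("test_file", test_file)]:
        ext = path.split(".")[-1]
        assert ext == train_ext, f"`{name}` should have the same extension (csv or json) as `train_file`."


check_extensions("data/train.csv", "data/dev.csv", "data/test.csv")  # passes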

+                data_files["test"] = data_args.test_file
+            else:
+                raise ValueError("Need either a GLUE task or a test file for `do_predict`.")
+
+        for key in data_files.keys():
+            logger.info(f"load a local file for {key}: {data_files[key]}")
Review comment on lines +229 to +230 (Collaborator):
This could be useful info to log, thanks for adding!

+
+        if data_args.train_file.endswith(".csv"):
+            # Loading a dataset from local csv files
+            datasets = load_dataset("csv", data_files=data_files)
+        else:
+            # Loading a dataset from local json files
+            datasets = load_dataset("json", data_files=data_files)
     # See more about loading any type of standard or custom dataset at
     # https://huggingface.co/docs/datasets/loading_datasets.html.
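As a standalone illustration of the loading call: `datasets.load_dataset` accepts a `data_files` mapping whose keys become split names, so the optional `test` entry simply yields a third split. File paths below are placeholders:

from datasets import load_dataset

# Placeholder paths; each CSV needs a header row naming the text/label columns.
data_files = {
    "train": "data/train.csv",
    "validation": "data/dev.csv",
    "test": "data/test.csv",
}
datasets = load_dataset("csv", data_files=data_files)
print(datasets["test"])  # the split loaded from data/test.csv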

Expand Down Expand Up @@ -325,7 +340,7 @@ def preprocess_function(examples):

     train_dataset = datasets["train"]
     eval_dataset = datasets["validation_matched" if data_args.task_name == "mnli" else "validation"]
-    if data_args.task_name is not None:
+    if data_args.task_name is not None or data_args.test_file is not None:
         test_dataset = datasets["test_matched" if data_args.task_name == "mnli" else "test"]

     # Log a few random samples from the training set:
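Downstream, the script's `do_predict` branch feeds this split to the trainer. A hedged sketch of that step, assuming `trainer`, `test_dataset`, and `is_regression` are set up as elsewhere in run_glue.py:

import numpy as np

# trainer.predict returns a PredictionOutput; .predictions holds the raw logits
# (or regression values). Argmax over the label axis gives class ids.
predictions = trainer.predict(test_dataset=test_dataset).predictions
predictions = np.squeeze(predictions) if is_regression else np.argmax(predictions, axis=1)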