
First pass at integrating xP3 #60

Draft · wants to merge 7 commits into base: main
Conversation

@viraat viraat commented Apr 14, 2023

- Add scripts to generate task constants for xP3 datasets
- Start on a task_config generator for xP3:
  - integrates the P3 config generator with the existing Hugging Face loader
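To make the constants-generation step concrete, here is a minimal sketch of what the generated mapping could look like. The helper, the dataset list, and the dict shape are all illustrative assumptions, not the actual script from this PR.

```python
# Hypothetical sketch: building a task-constants mapping for xP3.
# The real script enumerates the xP3 datasets; the names below are invented.

def make_task_name(ds_name: str, subset_name: str) -> str:
    """Derive a seqio-style task name from a HF dataset/subset pair."""
    base = f"{ds_name}_{subset_name}" if subset_name else ds_name
    # seqio task names are conventionally lowercase with underscores only.
    return base.lower().replace("-", "_").replace("/", "_")

# Illustrative subset of xP3 training sets (not the full list).
XP3_DATASETS = [
    ("super_glue", "copa"),
    ("xcopa", "sw"),
    ("paws-x", "en"),
]

XP3_TRAIN_TASKS_SPLIT = {
    make_task_name(ds, sub): {"dataset": ds, "subset": sub, "split": "train"}
    for ds, sub in XP3_DATASETS
}
```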

google-cla bot commented Apr 14, 2023

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up-to-date status, view the checks section at the bottom of the pull request.

@viraat viraat left a comment

Added some comments to help @shayne-longpre understand where to pay attention.

@viraat commented:

This file is still a work in progress; it will need a few more tweaks to get going.

I went with the same approach as P3 (T0), since that seemed to make the most sense to me. If this is the wrong path to go down, I'm happy to discuss.

I separated the file out for hackability; it can be folded back into one of the task_config files later.

@@ -0,0 +1,11 @@
"""Constants relate to xP3"""

XP3_TRAIN_TASKS_SPLIT = {

@viraat commented:

This is incomplete; the script is running, but it's slow.

continue
# Do not process T0 variants with negative examples.
# We still keep the variants of these sets in a different format.
# if "_score_eval" in subtask_id:

@viraat commented:

Right now, I've only generated the splits for the training datasets.

# We still keep the variants of these sets in a different format.
# if "_score_eval" in subtask_id:
# continue
# elif constants_t0.T0_TRAIN_TASK_METADATA[task_name][

@viraat commented:

I wasn't sure how the metadata for the task gets filled in; this is something I'll need help with.

task_name=subtask_id, task_source="P3")
XP3_TASK_CONFIGS[task_name] = TaskConfig(
source=seqio.TfdsDataSource(
tfds_name=f"huggingface:{ds_name}/{subset_name}",

@viraat commented:

From a quick eye test, I think this works: https://www.tensorflow.org/datasets/community_catalog/huggingface appears to include the datasets I spot-checked.

There's some custom version picking in xP3 that will need to be brought over. I plan to put that into the metadata and use it here.
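As a sketch of the naming scheme the diff above relies on, this is the string format passed as `tfds_name` to `seqio.TfdsDataSource`; the helper name is mine, not from the PR.

```python
# Hypothetical helper: build the "huggingface:" TFDS name for a HF dataset,
# with or without a subset/config, matching the f-string used in the diff.

def hf_tfds_name(ds_name: str, subset_name: str = "") -> str:
    if subset_name:
        return f"huggingface:{ds_name}/{subset_name}"
    return f"huggingface:{ds_name}"
```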

# elif constants_t0.T0_TRAIN_TASK_METADATA[task_name][
# "task_type"] == "t0_question_answer":
# preprocessors = [functools.partial(prep.t0, multiple_choice=False)]
# if constants_t0.T0_TRAIN_TASK_METADATA[task_name]["seq_len"]["max"] == 1:

@viraat commented:

I plan to use something similar, extending the metadata to drive the custom preprocessing that happens in xP3 (example fn).
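A hedged sketch of what that metadata-driven dispatch might look like, modeled on the commented-out T0 logic in the diff above; `prep_t0`, `pick_preprocessors`, and the metadata keys are stand-ins, not the real FLAN code.

```python
import functools

def prep_t0(example, multiple_choice=False):
    # Stand-in for prep.t0: just tags the example with the chosen mode.
    return {**example, "multiple_choice": multiple_choice}

def pick_preprocessors(task_name, metadata):
    """Choose preprocessors from per-task metadata (keys are illustrative)."""
    task_type = metadata[task_name]["task_type"]
    if task_type == "t0_question_answer":
        return [functools.partial(prep_t0, multiple_choice=False)]
    return [functools.partial(prep_t0, multiple_choice=True)]
```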

# pip install -q datasets
import datasets
# git clone -b tr13 https://github.com/Muennighoff/promptsource.git && cd promptsource; pip install -e .
from promptsource.templates import DatasetTemplates

@viraat commented:

I'd be happy to get rid of promptsource here; it's only used to get all the possible prompt templates for a given dataset. I'm assuming FC has that built in already, but I wasn't sure where to look.
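For context, this is roughly the shape of data promptsource supplies here: a set of named templates per dataset that render an example into an (input, target) pair. The registry and template strings below are invented stand-ins for illustration, not real promptsource templates.

```python
# Minimal stand-in for promptsource's per-dataset template registry.
# Keys are "dataset/subset"; values map template name -> (input, target)
# format strings. All entries below are invented.

TEMPLATES = {
    "super_glue/copa": {
        "plausible_alternative": (
            "{premise} What is a plausible {question}?",
            "{answer}",
        ),
    },
}

def apply_template(ds_key, name, example):
    """Render one example with the named template."""
    inp, tgt = TEMPLATES[ds_key][name]
    return inp.format(**example), tgt.format(**example)
```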
