
Closes #59 - Add CEI dataset #530

Open: wants to merge 5 commits into base: main

Conversation


@napsternxg commented Apr 30, 2022

Fixes #59 - Add CEI dataset

Please name your PR after the issue it closes. You can use the following line: "Closes #ISSUE-NUMBER" where you replace the ISSUE-NUMBER with the one corresponding to your dataset.

If the following information is NOT present in the issue, please populate:

Checkbox

  • Confirm that this PR is linked to the dataset issue.
  • Create the dataloader script biodatasets/my_dataset/my_dataset.py (please use only lowercase and underscore for dataset naming).
  • Provide values for the _CITATION, _DATASETNAME, _DESCRIPTION, _HOMEPAGE, _LICENSE, _URLs, _SUPPORTED_TASKS, _SOURCE_VERSION, and _BIGBIO_VERSION variables.
  • Implement _info(), _split_generators() and _generate_examples() in dataloader script.
  • Make sure that the BUILDER_CONFIGS class attribute is a list with at least one BigBioConfig for the source schema and one for a bigbio schema.
  • Confirm dataloader script works with the datasets.load_dataset function (see the sketch after this checklist).
  • Confirm that your dataloader script passes the test suite run with python -m tests.test_bigbio biodatasets/my_dataset/my_dataset.py.
  • If my dataset is local, I have provided an output of the unit-tests in the PR (please copy paste). This is OPTIONAL for public datasets, as we can test these without access to the data files.
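For illustration only (not part of the PR template): a minimal sketch of how the two confirmation steps above could be run locally. It assumes a local BigBio checkout, a dataloader at biodatasets/cei/cei.py, and the usual config names cei_source and cei_bigbio_text; the actual paths and config names in the PR may differ.

# Minimal sketch; assumptions: local BigBio checkout, dataloader at
# biodatasets/cei/cei.py, config names "cei_source" and "cei_bigbio_text".
from datasets import load_dataset

ds_source = load_dataset("biodatasets/cei/cei.py", name="cei_source")
ds_bigbio = load_dataset("biodatasets/cei/cei.py", name="cei_bigbio_text")
print(ds_bigbio["train"][0])

# Test suite (run from the repository root):
#   python -m tests.test_bigbio biodatasets/cei/cei.py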

@sg-wbi sg-wbi changed the title Fixes #59 - Add CEI dataset Closes #59 - Add CEI dataset May 9, 2022
@mariosaenger mariosaenger self-assigned this Oct 28, 2024
@mariosaenger (Collaborator)

@phlobo I revised the implementation of this dataset. Please have a look at it.

@phlobo (Collaborator) commented Oct 30, 2024

@mariosaenger

I noticed there are some duplicate labels per document:

{'id': '10022290',
 'document_id': '10022290',
 'text': '...',
 'labels': ['Biomonitoring--exposure biomarker--blood--cord blood',
  'Biomonitoring--exposure biomarker--mothers milk',
  'Biomonitoring--exposure biomarker--blood--cord blood',
  'Biomonitoring--exposure biomarker--mothers milk',
  'Biomonitoring--effect marker--physiological parameter']}

As a result, the label statistics don't match the ones reported in the paper: e.g., there are 1467 instances of Biomonitoring--exposure biomarker--urine vs. 784 in the paper.

I'm not sure I entirely understand the syntax of the source dataset labels (e.g., https://github.com/sb895/chemical-exposure-information-corpus/blob/master/labels/10022290.txt), but duplicate removal after parsing the labels might already do the trick.
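For illustration only (not part of the PR as submitted): a minimal, order-preserving de-duplication after the parsing step could look like the following, assuming the parsed labels are collected in a plain Python list named labels (as in the snippet further down).

# Sketch: drop duplicate labels while keeping their first-occurrence order
# (dict.fromkeys preserves insertion order in Python 3.7+).
labels = list(dict.fromkeys(labels))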

text_files = sorted(list(base_dir.glob("./text/*.txt")))

if self.config.schema == "source":
    # TODO: yield (key, example) tuples in the original dataset schema

Collaborator review comment: Please remove TODO comments

    yield key, example

elif self.config.schema == "bigbio_text":
    # TODO: yield (key, example) tuples in the bigbio schema

Collaborator review comment: Please remove TODO comments

with open(label_file, encoding="utf-8") as fp:
    label_text = fp.read()

labels = [line.strip(" -") for line in LABEL_REGEX.findall(label_text)]

Collaborator review comment: This results in many duplicate labels. Maybe just wrap it in a set?
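For illustration only (not part of the PR as submitted): the set-based variant suggested above could look like the following; the added sorted() call is an assumption to keep the label order deterministic, not part of the reviewer's suggestion.

# Sketch: deduplicate by building a set, then sort for a stable order.
labels = sorted({line.strip(" -") for line in LABEL_REGEX.findall(label_text)})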

_DESCRIPTION = """\
The Chemical Exposure Information (CEI) Corpus consists of 3661 PubMed publication abstracts manually annotated by \
experts according to a taxonomy. The taxonomy consists of 32 classes in a hierarchy. Zero or more class labels are \
assigned to each sentence in the corpus.

Collaborator review comment: The corpus does not really contain "sentences", but I guess the description was copied from the original source...


Successfully merging this pull request may close these issues.

Create a dataset loader for CEI