Duplicates and multiple versions of samples #10
Hi @pfischer-nvidia ! Thanks for your interest in the dataset, and for going through the corpus in detail! Our goal was to release the corpus as a v1 exactly so we can get community input about quality issues, and so this input is super helpful. I will go through this in more detail soon, but wanted to get back to you with some quick answers ASAP:
For a strict definition of "subset", they aren't true subsets, and they aren't intended to be --- my apologies for the confusing naming. Imagine a document with images, some of which contain faces and some of which don't. If you simply remove the images with detected faces, the resulting image-text alignment might not have as high a similarity as if you re-ran the assignment procedure. So, we remove images with detected faces and then re-run the assignment algorithm, which might result in different assignments globally.
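This global re-assignment effect can be sketched with a toy example: if images are assigned to sentences by maximizing total similarity, dropping one image and re-solving can move a *remaining* image to a different sentence. The similarity values and the brute-force matcher below are purely illustrative (the actual corpus uses CLIP similarities at much larger scale):

```python
from itertools import permutations

def best_assignment(sim):
    """Brute-force max-similarity matching of images (rows) to sentences (cols).
    Illustrative only; a real pipeline would use an efficient assignment solver."""
    n_img, n_sent = len(sim), len(sim[0])
    best, best_score = None, float("-inf")
    for cols in permutations(range(n_sent), n_img):
        score = sum(sim[i][c] for i, c in enumerate(cols))
        if score > best_score:
            best, best_score = cols, score
    return dict(enumerate(best))

sim = [
    [0.9, 0.3],  # image 0 (face detected)
    [0.8, 0.7],  # image 1
]

full = best_assignment(sim)         # image 0 -> sentence 0, image 1 -> sentence 1
reduced = best_assignment(sim[1:])  # after dropping image 0, image 1 moves to sentence 0
```

Simply deleting image 0 from `full` would have left image 1 on sentence 1 with similarity 0.7, while re-running the assignment gives it sentence 0 with similarity 0.8 --- hence the globally different assignments.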
These are also not true subsets for a strict definition of subset. As described in the paper, there are additional filters we apply that can affect which images are available within each document: these include document-level thresholds (like a min/max number of images/sentences), but also things that affect within-document properties, like stricter deduplication that (as mentioned in the paper) can create some false positives, which are discarded.
This is something we are aware of, and it is a concern with lots of pretraining datasets out there. Our assumption was that the deduplication efforts of C4 were sufficient for us not to run deduplication ourselves, but we have also recently noticed a small number of duplicate URLs. We removed a *ton* of duplicate images from our original 1.4B set, but it looks like we missed these in v1 of the release. We'll check it out with your findings.
Hi @pfischer-nvidia --- thanks for this report! Along with fixing some of the alignments mentioned in #11, we are working on a v1.1 of the corpus now which aims to address the ~1% duplicate URL issue.
Thanks. Are you going to make the samples unique with respect to the URL?
Oh, and one more question: a large part of the images referenced in the dataset is no longer available on the internet. Would it be possible to get these from you?
I am closing this issue as resolved by #13 --- the update we made was to do probabilistic deduplication such that, in expectation, each URL appears once. But if you want a more strictly URL-deduplicated set, you can discard any docs marked by the new field. To answer your questions:
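A minimal sketch of such probabilistic deduplication, assuming each document carries a `url` field (the exact field name and sampling scheme in the release may differ): keep each occurrence of a URL with probability 1/count, so each URL survives once in expectation, and unique URLs are always kept.

```python
import random
from collections import Counter

def probabilistic_dedup(docs, seed=0):
    """Keep each occurrence of a URL with probability 1/count.
    A URL appearing once is always kept; a URL appearing c times
    survives Binomial(c, 1/c) times, i.e. once in expectation."""
    rng = random.Random(seed)
    counts = Counter(d["url"] for d in docs)
    return [d for d in docs if rng.random() < 1.0 / counts[d["url"]]]

docs = [{"url": "a"}, {"url": "a"}, {"url": "b"}]
kept = probabilistic_dedup(docs)  # "b" always survives; each "a" kept w.p. 1/2
```

Note this scheme is stochastic: a duplicated URL can still appear zero or multiple times in any single draw, which is why a strictly deduplicated set needs the extra discard step mentioned above.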
Dear authors,
while processing the MMC4 dataset we found some anomalies, and we hope you can comment on or explain them.
Our Expectations
We expected that there is one large dataset (`mmc4`) that includes samples with face detections, and that there are several subsets of that large dataset that have been filtered:
- `mmc4-ff` (public)
- `mmc4-core`
- `mmc4-core-ff` (public)
We expected that every sample of `mmc4-core-ff` would also be contained in `mmc4-ff`, etc.
Our Findings
We found the following:
Exact Duplicates
At first, we matched samples by the MD5 hash of the JSON string to find exact duplicates.
For example, for `mmc4-core-ff` we found 5598117 total samples (i.e. JSON lines) among all shards, but only 5506430 unique samples. This means that 1.6% of the samples within that subset are exact duplicates.
Other Duplicates
If we match just by the document URL string, the duplicate rate is even higher: in the case of `mmc4-core-ff` we then obtain only 5492699 unique samples, so 1.9% are duplicates. Interestingly, the duplicated samples appear not just twice but up to 88 times each.
Here are the top ten duplicate URLs with the number of appearances:
We took a closer look at the first sample with 88 appearances and found that 87 of those are exact duplicates, but one is slightly different. For that one sample, the image similarities and the similarity matrix differ, although the text and images match those of the other 87 samples.
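A URL-level duplicate count like the one above can be computed with a simple counter; the `url` field name follows the released JSON schema as we understand it:

```python
import json
from collections import Counter

def url_duplicate_stats(jsonl_lines):
    """Count documents per URL; return the ten most-duplicated URLs
    and the overall duplicate rate (1 - unique/total)."""
    counts = Counter(json.loads(line)["url"] for line in jsonl_lines)
    dup_rate = 1 - len(counts) / sum(counts.values())
    return counts.most_common(10), dup_rate

lines = ['{"url": "x"}'] * 3 + ['{"url": "y"}']
top, rate = url_duplicate_stats(lines)  # top = [("x", 3), ("y", 1)], rate = 0.5
```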
Faces vs. No Faces
We assumed that the fewer-faces datasets are simply filtered versions of the sets with faces.
We filtered the set with faces ourselves, keeping only the samples that have `face_detections: None`. However, this does not result in the same set as the published fewer-faces set.
This effect is related to the similar but slightly different samples mentioned above.
One example is this:
Compare sample 113 of `mmc4_core_faces/docs_shard_4943_v3.jsonl.zip` with sample 1523 of `mmc4_full_faces/docs_shard_4943_v2.jsonl.zip`. Both have the same URL, and the core set should be a subset of the full set. However, the second sample contains an additional image with face detections, while all other images contain no face detections.
Questions