
Active Learning sampling quality #983

Closed
tonca opened this issue Mar 18, 2022 · 6 comments

Comments


tonca commented Mar 18, 2022

After using the dedupe library for a while in the context of video content reconciliation, we encountered some situations where the Active Learning sampling is very poor. This makes it difficult to build a good training set for the classifier and, as a consequence, the reconciliation results are poor as well.

For instance, we ran some tests trying to reconcile two well-known public data providers (IMDb and TMDB), which contain reciprocal references that can be used as ground truth, plus good metadata. In addition, we could build the dataset knowing that all entries can be reconciled in a many-to-one fashion (set 1 is contained in set 2, so 100% recall is theoretically possible).

We tried to reconcile episodes and used a few fields in the process (episode title, series title, season number, episode number, series year). The Active Learning sampling was quite balanced between positive and negative examples, so it was fairly effortless to collect 10 samples of positive and negative pairs. The final results were quite satisfying as well: recall 78%, precision 98%. Moreover, by scrolling through the results, we noticed that the model learned to ignore the episode title field, which was not consistent between datasets.

Afterwards we performed a second test, removing the episode title field but keeping everything else as in the previous test (same dataset, same configuration). This time the Active Learning sampling was quite poor: almost all pairs were non-matches (it took more than 200 pairs to obtain 8 positives). The final reconciliation in this case was also poor: recall 15% and precision 91%.
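
Below is a minimal sketch of the kind of record-linkage setup described above, assuming the dedupe 2.x dict-based variable definition. The field names, field types, loader functions, and join parameters are illustrative assumptions, not taken from the actual project.

import dedupe

# Variable definition for the fields mentioned above (dedupe 2.x dict format).
# Field names and types are assumptions for illustration.
fields = [
    {'field': 'episode_title', 'type': 'String'},
    {'field': 'series_title', 'type': 'String'},
    {'field': 'season_number', 'type': 'Exact'},
    {'field': 'episode_number', 'type': 'Exact'},
    {'field': 'series_year', 'type': 'Exact'},
]

# data_1 and data_2 are dicts of records keyed by record id, e.g.
# {'tmdb/42': {'episode_title': 'Pilot', 'series_title': '...', ...}}
data_1 = load_episodes('tmdb')  # hypothetical loader
data_2 = load_episodes('imdb')  # hypothetical loader

linker = dedupe.RecordLink(fields)
linker.prepare_training(data_1, data_2)

# Interactive active-learning labeling session (the step discussed in this issue).
dedupe.console_label(linker)
linker.train()

# Many-to-one join, since set 1 is contained in set 2.
links = linker.join(data_1, data_2, threshold=0.5, constraint='many-to-one')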

I would then like to ask whether it is possible to mitigate this kind of issue:

  1. Is it important to balance the active learning pairs? In the second test we fed 200 negative vs. 8 positive pairs. Can this be the cause of the low recall? (See the labeling-loop sketch at the end of this comment.)
  2. How explainable is the model? How would you suggest investigating bad reconciliation results in general?
  3. Do you have any idea what the possible causes of the bad sampling in this specific test case might be?

Thank you for your great work,

Antonio
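
Regarding question 1 above: the labeled-pair balance can be tracked directly by driving the labeling loop manually instead of using console_label. A rough sketch, assuming the dedupe 2.x uncertain_pairs() / mark_pairs() API; is_match() is a hypothetical stand-in for the human yes/no decision made in console_label, and the target counts are illustrative.

n_pos, n_neg = 0, 0

# Keep labeling until both classes have a minimum number of examples.
while n_pos < 10 or n_neg < 10:
    # The next pair the active learner is least certain about.
    pair = linker.uncertain_pairs()[0]
    if is_match(pair):  # hypothetical: the human's yes/no answer
        linker.mark_pairs({'match': [pair], 'distinct': []})
        n_pos += 1
    else:
        linker.mark_pairs({'match': [], 'distinct': [pair]})
        n_neg += 1

linker.train()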


fgregg commented Mar 18, 2022

Can you try the better_sampling branch and let me know if that gives you better results?


tonca commented Mar 21, 2022

Hi again,
I tried to run it, but I am getting an error and I am not sure how to fix it.

'RecordLinkBlockLearner' object has no attribute 'candidates'
  File "/home/ubuntu/dedupe/dedupe/labeler.py", line 218, in candidate_scores
    labels = self.predict(self.candidates)
  File "/home/ubuntu/dedupe/dedupe/labeler.py", line 374, in pop
    probabilities = learner.candidate_scores()
  File "/home/ubuntu/dedupe/dedupe/api.py", line 1143, in uncertain_pairs
    return [self.active_learner.pop()]
  File "/home/ubuntu/dedupe/dedupe/convenience.py", line 44, in console_label
    uncertain_pairs = deduper.uncertain_pairs()
  File "/home/ubuntu/kf-reconciliation-on-premises/bin/dedupe_reconciliation.py", line 64, in reconcile
    dedupe.console_label(linker)
  File "/home/ubuntu/kf-reconciliation-on-premises/bin/dedupe_bootstrapping.py", line 58, in main
    reconciled_df = dedupe_reconciliation.reconcile(
  File "/home/ubuntu/kf-reconciliation-on-premises/bin/dedupe_bootstrapping.py", line 80, in <module>
    main()


fgregg commented Mar 21, 2022

Looks like you are doing record linkage and not deduping; I haven't updated the code for that code path yet.


tonca commented Mar 21, 2022

Yes, exactly. I'll wait then, thank you.


fgregg commented Mar 29, 2022

@tonca, the better_sampling branch has been updated for record link #982


fgregg commented Apr 12, 2022

Closing for now due to lack of feedback.

fgregg closed this as completed Apr 12, 2022
github-actions bot locked as resolved and limited conversation to collaborators Apr 29, 2022