
Think about adding GROBID for "PDF to BibTeX" functionality #327

Closed
koppor opened this issue Mar 12, 2018 · 4 comments

Comments

@koppor
Owner

koppor commented Mar 12, 2018

See https://arxiv.org/abs/1802.01168

Bibliographic reference parsing refers to extracting machine-readable metadata, such as the names of the authors, the title, or the journal name, from bibliographic reference strings. Many approaches to this problem have been proposed, including regular expressions, knowledge bases, and supervised machine learning. Many open-source reference parsers based on various algorithms are also available. In this paper, we apply, evaluate, and compare ten reference parsing tools in a specific business use case. The tools are Anystyle-Parser, Biblio, CERMINE, Citation, Citation-Parser, GROBID, ParsCit, PDFSSA4MET, Reference Tagger, and Science Parse, and we compare them both out of the box and tuned to project-specific data. According to our evaluation, the best performing out-of-the-box tool is GROBID (F1 0.89), followed by CERMINE (F1 0.83) and ParsCit (F1 0.75). We also found that even though machine learning-based tools and tools based on rules or regular expressions achieve similar average precision (0.77 for ML-based tools vs. 0.76 for non-ML-based tools), ML-based tools achieve a recall three times higher than non-ML-based tools (0.66 vs. 0.22). Our study also confirms that tuning the models to task-specific data increases quality: the retrained versions of the reference parsers are in all cases better than their out-of-the-box counterparts; for GROBID, F1 increased by 3% (0.92 vs. 0.89), for CERMINE by 11% (0.92 vs. 0.83), and for ParsCit by 16% (0.87 vs. 0.75).
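
For context (this is the standard definition of F1, not something stated in the abstract): F1 is the harmonic mean of precision and recall, which is why the three-fold recall advantage of the ML-based tools dominates despite near-identical precision. Plugging in the averages quoted above:

```
F1 = 2 · precision · recall / (precision + recall)

ML-based tools:     2 · 0.77 · 0.66 / (0.77 + 0.66) ≈ 0.71
non-ML-based tools: 2 · 0.76 · 0.22 / (0.76 + 0.22) ≈ 0.34
```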

@tobiasdiez
Collaborator

Sadly, it is not easy to integrate GROBID directly into JabRef, see kermitt2/grobid#250. We could instead run GROBID on a web server: JabRef uploads PDFs, the server analyzes them, and the results are posted back to JabRef. A minimal sketch of that flow is below.
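
A minimal sketch, assuming a GROBID service running on its default port 8070. The endpoint `/api/processHeaderDocument` and the multipart field name `input` come from GROBID's REST API; the class and method names here are just illustrative:

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class GrobidClient {

    // Assumes a GROBID service on its default port 8070;
    // processHeaderDocument extracts title/author/journal metadata.
    private static final String GROBID_URL =
            "http://localhost:8070/api/processHeaderDocument";

    public static String pdfToTei(Path pdf) throws IOException, InterruptedException {
        String boundary = "----JabRefGrobid" + System.nanoTime();

        // Build a multipart/form-data body by hand: one part named "input"
        // carrying the raw PDF bytes, as GROBID's REST API expects.
        byte[] head = ("--" + boundary + "\r\n"
                + "Content-Disposition: form-data; name=\"input\"; filename=\""
                + pdf.getFileName() + "\"\r\n"
                + "Content-Type: application/pdf\r\n\r\n")
                .getBytes(StandardCharsets.UTF_8);
        byte[] tail = ("\r\n--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8);
        byte[] file = Files.readAllBytes(pdf);

        byte[] body = new byte[head.length + file.length + tail.length];
        System.arraycopy(head, 0, body, 0, head.length);
        System.arraycopy(file, 0, body, head.length, file.length);
        System.arraycopy(tail, 0, body, head.length + file.length, tail.length);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(GROBID_URL))
                .header("Content-Type", "multipart/form-data; boundary=" + boundary)
                .POST(HttpRequest.BodyPublishers.ofByteArray(body))
                .build();

        // GROBID answers with TEI XML describing the extracted metadata.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```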

@tobiasdiez
Collaborator

Alternative: http://excite.west.uni-koblenz.de/excite

@koppor
Owner Author

koppor commented Sep 2, 2020

GROBID is in place for references from plain text: #327
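
For anyone wanting to try the plain-text path from code, a minimal sketch against GROBID's citation service (again assuming a local instance on the default port 8070; `/api/processCitation` and the `citations` form field are part of GROBID's REST API, the sample reference string is made up):

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class GrobidCitationDemo {
    public static void main(String[] args) throws Exception {
        // A raw reference string, as a user might paste it into JabRef.
        String reference =
                "J. Doe. An example paper. Journal of Examples, 12(3):1-10, 2018.";

        // processCitation takes the string as a form field named "citations"
        // and returns the parsed reference as TEI XML.
        String form = "citations="
                + URLEncoder.encode(reference, StandardCharsets.UTF_8);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8070/api/processCitation"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // TEI XML: author, title, journal, ...
    }
}
```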

@koppor
Owner Author

koppor commented Sep 22, 2021

Fixed by JabRef#7947
